Jan 30 08:09:20 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 30 08:09:20 crc restorecon[4690]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: 
the following files under /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ each produced the same record, "not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16", all stamped Jan 30 08:09:20 crc restorecon[4690] (consolidated here, one file per line, in the order logged):
  48bec511.0
  69105f4f.0
  GlobalSign.1.pem
  0b9bc432.0
  Certum_Trusted_Network_CA_2.pem
  GTS_Root_R3.pem
  32888f65.0
  CommScope_Public_Trust_ECC_Root-01.pem
  6b03dec0.0
  219d9499.0
  CommScope_Public_Trust_ECC_Root-02.pem
  5acf816d.0
  cbf06781.0
  CommScope_Public_Trust_RSA_Root-01.pem
  GTS_Root_R4.pem
  dc99f41e.0
  CommScope_Public_Trust_RSA_Root-02.pem
  GlobalSign.3.pem
  AAA_Certificate_Services.pem
  985c1f52.0
  8794b4e3.0
  D-TRUST_BR_Root_CA_1_2020.pem
  e7c037b4.0
  ef954a4e.0
  D-TRUST_EV_Root_CA_1_2020.pem
  2add47b6.0
  90c5a3c8.0
  D-TRUST_Root_Class_3_CA_2_2009.pem
  b0f3e76e.0
  53a1b57a.0
  D-TRUST_Root_Class_3_CA_2_EV_2009.pem
  GlobalSign_Root_CA.pem
  DigiCert_Assured_ID_Root_CA.pem
  5ad8a5d6.0
  68dd7389.0
  DigiCert_Assured_ID_Root_G2.pem
  9d04f354.0
  8d6437c3.0
  062cdee6.0
  bd43e1dd.0
  DigiCert_Assured_ID_Root_G3.pem
  7f3d5d1d.0
  c491639e.0
  GlobalSign_Root_E46.pem
  DigiCert_Global_Root_CA.pem
  3513523f.0
  399e7759.0
  feffd413.0
  d18e9066.0
  DigiCert_Global_Root_G2.pem
  607986c7.0
  c90bc37d.0
  1b0f7e5c.0
  1e08bfd1.0
  DigiCert_Global_Root_G3.pem
  dd8e9d41.0
  ed39abd0.0
  a3418fda.0
  bc3f2570.0
  DigiCert_High_Assurance_EV_Root_CA.pem
  244b5494.0
  81b9768f.0
  GlobalSign.2.pem
  4be590e0.0
  DigiCert_TLS_ECC_P384_Root_G5.pem
  9846683b.0
  252252d2.0
  1e8e7201.0
  ISRG_Root_X1.pem
  DigiCert_TLS_RSA4096_Root_G5.pem
  d52c538d.0
  c44cc0c0.0
  GlobalSign_Root_R46.pem
  DigiCert_Trusted_Root_G4.pem
  75d1b2ed.0
  a2c66da8.0
  GTS_Root_R2.pem
  ecccd8db.0
  Entrust.net_Certification_Authority__2048_.pem
  aee5f10d.0
  3e7271e8.0
  b0e59380.0
  4c3982f2.0
  Entrust_Root_Certification_Authority.pem
  6b99d060.0
  bf64f35b.0
  0a775a30.0
  002c0b4f.0
  cc450945.0
  Entrust_Root_Certification_Authority_-_EC1.pem
  106f3e4d.0
  b3fb433b.0
  GlobalSign.pem
  4042bcee.0
  Entrust_Root_Certification_Authority_-_G2.pem
  02265526.0
  455f1b52.0
  0d69c7e1.0
  9f727ac7.0
  Entrust_Root_Certification_Authority_-_G4.pem
  5e98733a.0
  f0cd152c.0
  dc4d6a89.0
  6187b673.0
  FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem
  ba8887ce.0
  068570d1.0
  f081611a.0
  48a195d8.0
  GDCA_TrustAUTH_R5_ROOT.pem
  0f6fa695.0
  ab59055e.0
  b92fd57f.0
  GLOBALTRUST_2020.pem
  fa5da96b.0
  1ec40989.0
  7719f463.0
  GTS_Root_R1.pem
  1001acf7.0
  f013ecaf.0
  626dceaf.0
  c559d742.0
  1d3472b9.0
  9479c8c3.0
  a81e292b.0
  4bfab552.0
  Go_Daddy_Class_2_Certification_Authority.pem
  Sectigo_Public_Server_Authentication_Root_E46.pem
  Go_Daddy_Root_Certificate_Authority_-_G2.pem
  e071171e.0
  57bcb2da.0
  HARICA_TLS_ECC_Root_CA_2021.pem
  ab5346f4.0
  5046c355.0
  HARICA_TLS_RSA_Root_CA_2021.pem
  865fbdf9.0
  da0cfd1d.0
  85cde254.0
  Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem
  cbb3f32b.0
  SecureSign_RootCA11.pem
  Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem
  5860aaa6.0
  31188b5e.0
  HiPKI_Root_CA_-_G1.pem
  c7f1359b.0
  5f15c80c.0
  Hongkong_Post_Root_CA_3.pem
  09789157.0
  ISRG_Root_X2.pem
  18856ac4.0
  1e09d511.0
  IdenTrust_Commercial_Root_CA_1.pem
  cf701eeb.0
  d06393bb.0
  IdenTrust_Public_Sector_Root_CA_1.pem
  10531352.0
  Izenpe.com.pem
  SecureTrust_CA.pem
  b0ed035a.0
  Microsec_e-Szigno_Root_CA_2009.pem
  8160b96c.0
  e8651083.0
  2c63f966.0
  Security_Communication_RootCA2.pem
  Microsoft_ECC_Root_Certificate_Authority_2017.pem
  8d89cda1.0
  01419da9.0
  SSL.com_TLS_RSA_Root_CA_2022.pem
  b7a5b843.0
  Microsoft_RSA_Root_Certificate_Authority_2017.pem
  bf53fb88.0
  9591a472.0
  3afde786.0
  SwissSign_Gold_CA_-_G2.pem
  NAVER_Global_Root_Certification_Authority.pem
  3fb36b73.0
  d39b0a2c.0
  a89d74c2.0
  cd58d51e.0
  b7db1890.0
  NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem
  988a38cb.0
  60afe812.0
  f39fc864.0
  5443e9e3.0
  OISTE_WISeKey_Global_Root_GB_CA.pem
  e73d606e.0
  dfc0fe80.0
  b66938e9.0
  1e1eab7c.0
  OISTE_WISeKey_Global_Root_GC_CA.pem
  773e07ad.0
  3c899c73.0
  d59297b8.0
  ddcda989.0
  QuoVadis_Root_CA_1_G3.pem
  749e9e03.0
  52b525c7.0
  Security_Communication_RootCA3.pem
  QuoVadis_Root_CA_2.pem
  d7e8dc79.0
  7a819ef2.0
  08063a00.0
  6b483515.0
  QuoVadis_Root_CA_2_G3.pem
  064e0aa9.0
  1f58a078.0
  6f7454b3.0
  7fa05551.0
  QuoVadis_Root_CA_3.pem
  76faf6c0.0
  9339512a.0
  f387163d.0
  ee37c333.0
  QuoVadis_Root_CA_3_G3.pem
  e18bfb83.0
  e442e424.0
  fe8a2cd8.0
  23f4c490.0
  5cd81ad7.0
  SSL.com_EV_Root_Certification_Authority_ECC.pem
  f0c70a8d.0
  7892ad52.0
  SZAFIR_ROOT_CA2.pem
  4f316efb.0
  SSL.com_EV_Root_Certification_Authority_RSA_R2.pem
  06dc52d5.0
  583d0756.0
  Sectigo_Public_Server_Authentication_Root_R46.pem
  SSL.com_Root_Certification_Authority_ECC.pem
  0bf05006.0
  88950faa.0
  9046744a.0
  3c860d51.0
  SSL.com_Root_Certification_Authority_RSA.pem
  6fa5da56.0
  33ee480d.0
  Secure_Global_CA.pem
  63a2c897.0
  SSL.com_TLS_ECC_Root_CA_2022.pem
  bdacca6f.0
  ff34af3f.0
  dbff3a01.0
  Security_Communication_ECC_RootCA1.pem
  emSign_Root_CA_-_C1.pem
  Starfield_Class_2_Certification_Authority.pem
  406c9bb1.0
  Starfield_Root_Certificate_Authority_-_G2.pem
  emSign_ECC_Root_CA_-_C3.pem
  Starfield_Services_Root_Certificate_Authority_-_G2.pem
  SwissSign_Silver_CA_-_G2.pem
  99e1b953.0
  T-TeleSec_GlobalRoot_Class_2.pem
  vTrus_Root_CA.pem
  T-TeleSec_GlobalRoot_Class_3.pem
  14bc7599.0
  TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem
  TWCA_Global_Root_CA.pem
  7a3adc42.0
  TWCA_Root_Certification_Authority.pem
  f459871d.0
  Telekom_Security_TLS_ECC_Root_2020.pem
  emSign_Root_CA_-_G1.pem
  Telekom_Security_TLS_RSA_Root_2023.pem
  TeliaSonera_Root_CA_v1.pem
  Telia_Root_CA_v2.pem
  8f103249.0
  f058632f.0
  ca-certificates.crt
  TrustAsia_Global_Root_CA_G3.pem
  9bf03295.0
  98aaf404.0
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 
08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 08:09:20 crc 
restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 
08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:20 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:09:21 crc restorecon[4690]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 08:09:21 crc restorecon[4690]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 08:09:21 crc restorecon[4690]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 30 08:09:21 crc kubenswrapper[4870]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 08:09:21 crc kubenswrapper[4870]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 30 08:09:21 crc kubenswrapper[4870]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 08:09:21 crc kubenswrapper[4870]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 08:09:21 crc kubenswrapper[4870]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 08:09:21 crc kubenswrapper[4870]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.804182 4870 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.814696 4870 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.815565 4870 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.815606 4870 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.815956 4870 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.815995 4870 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816014 4870 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816027 4870 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816040 4870 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816053 4870 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816065 4870 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816076 4870 feature_gate.go:330] unrecognized feature gate: Example Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816087 4870 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816101 4870 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816114 4870 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816124 4870 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816134 4870 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816145 4870 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816155 4870 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816169 4870 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816183 4870 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816194 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816206 4870 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816216 4870 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816226 4870 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816236 4870 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816246 4870 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816256 4870 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816267 4870 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816277 4870 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816287 4870 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816297 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816307 4870 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816328 4870 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816338 4870 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816348 4870 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816358 4870 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816368 4870 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816378 4870 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816388 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816398 4870 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816408 4870 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816418 4870 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816429 4870 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816439 4870 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816450 4870 feature_gate.go:330] unrecognized feature gate: 
AutomatedEtcdBackup Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816460 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816469 4870 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816486 4870 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816496 4870 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816506 4870 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816516 4870 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816526 4870 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816536 4870 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816547 4870 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816558 4870 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816569 4870 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816579 4870 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816589 4870 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816601 4870 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816613 4870 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816623 4870 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816638 4870 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
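The GA and deprecated notices in this burst (KMSv1, DisableKubeletCloudCredentialProviders, ValidatingAdmissionPolicy, CloudDualStackNodeIPs) are warnings rather than errors because each override still matches where the gate is headed; once a graduated gate is locked to its default, setting it the other way stops being accepted at all. A small sketch of that locked-to-default rule as I understand the upstream convention — an assumption, not something this log demonstrates:

```go
// Sketch of the "locked to default" rule for graduated feature gates;
// the lock states below are illustrative assumptions.
package main

import (
	"fmt"
	"log"
)

type gate struct {
	def    bool // default value
	locked bool // GA gates eventually lock to their default
}

var registry = map[string]gate{
	"ValidatingAdmissionPolicy": {def: true, locked: true},
	"KMSv1":                     {def: false, locked: false}, // deprecated but still settable
}

func set(name string, val bool) error {
	g, ok := registry[name]
	if !ok {
		log.Printf("W] unrecognized feature gate: %s", name)
		return nil // unknown gates are skipped here, matching this journal
	}
	if g.locked && val != g.def {
		return fmt.Errorf("cannot set feature gate %s to %v, feature is locked to %v", name, val, g.def)
	}
	return nil
}

func main() {
	fmt.Println(set("ValidatingAdmissionPolicy", false)) // rejected: locked to true
	fmt.Println(set("KMSv1", true))                      // accepted, with a warning upstream
}
```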
Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816650 4870 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816662 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816672 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816689 4870 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816700 4870 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816713 4870 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816723 4870 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816734 4870 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.816745 4870 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.817976 4870 flags.go:64] FLAG: --address="0.0.0.0" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818012 4870 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818037 4870 flags.go:64] FLAG: --anonymous-auth="true" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818053 4870 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818069 4870 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818082 4870 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818098 4870 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818123 4870 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818136 4870 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818150 4870 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818165 4870 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818178 4870 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818191 4870 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818205 4870 flags.go:64] FLAG: --cgroup-root="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818217 4870 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818229 4870 flags.go:64] FLAG: --client-ca-file="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818243 4870 flags.go:64] FLAG: --cloud-config="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818256 4870 flags.go:64] FLAG: --cloud-provider="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818268 4870 flags.go:64] FLAG: --cluster-dns="[]" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 
08:09:21.818286 4870 flags.go:64] FLAG: --cluster-domain="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818298 4870 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818310 4870 flags.go:64] FLAG: --config-dir="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818322 4870 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818336 4870 flags.go:64] FLAG: --container-log-max-files="5" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818352 4870 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818363 4870 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818376 4870 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818388 4870 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818400 4870 flags.go:64] FLAG: --contention-profiling="false" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818412 4870 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818423 4870 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818436 4870 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818447 4870 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818464 4870 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818476 4870 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818488 4870 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818499 4870 flags.go:64] FLAG: --enable-load-reader="false" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818511 4870 flags.go:64] FLAG: --enable-server="true" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818523 4870 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818540 4870 flags.go:64] FLAG: --event-burst="100" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818552 4870 flags.go:64] FLAG: --event-qps="50" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818564 4870 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818575 4870 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818586 4870 flags.go:64] FLAG: --eviction-hard="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818601 4870 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818613 4870 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818625 4870 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818639 4870 flags.go:64] FLAG: --eviction-soft="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818653 4870 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 30 08:09:21 crc 
kubenswrapper[4870]: I0130 08:09:21.818665 4870 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818677 4870 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818689 4870 flags.go:64] FLAG: --experimental-mounter-path="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818701 4870 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818713 4870 flags.go:64] FLAG: --fail-swap-on="true" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818724 4870 flags.go:64] FLAG: --feature-gates="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818740 4870 flags.go:64] FLAG: --file-check-frequency="20s" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818753 4870 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818765 4870 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818778 4870 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818789 4870 flags.go:64] FLAG: --healthz-port="10248" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818802 4870 flags.go:64] FLAG: --help="false" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818813 4870 flags.go:64] FLAG: --hostname-override="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818825 4870 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818837 4870 flags.go:64] FLAG: --http-check-frequency="20s" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818849 4870 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818860 4870 flags.go:64] FLAG: --image-credential-provider-config="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818872 4870 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818921 4870 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818933 4870 flags.go:64] FLAG: --image-service-endpoint="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818944 4870 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818956 4870 flags.go:64] FLAG: --kube-api-burst="100" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818968 4870 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818981 4870 flags.go:64] FLAG: --kube-api-qps="50" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.818992 4870 flags.go:64] FLAG: --kube-reserved="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819004 4870 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819016 4870 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819029 4870 flags.go:64] FLAG: --kubelet-cgroups="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819040 4870 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819051 4870 flags.go:64] FLAG: --lock-file="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 
08:09:21.819062 4870 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819075 4870 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819089 4870 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819109 4870 flags.go:64] FLAG: --log-json-split-stream="false" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819125 4870 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819137 4870 flags.go:64] FLAG: --log-text-split-stream="false" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819150 4870 flags.go:64] FLAG: --logging-format="text" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819162 4870 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819177 4870 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819189 4870 flags.go:64] FLAG: --manifest-url="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819201 4870 flags.go:64] FLAG: --manifest-url-header="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819218 4870 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819231 4870 flags.go:64] FLAG: --max-open-files="1000000" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819246 4870 flags.go:64] FLAG: --max-pods="110" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819258 4870 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819269 4870 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819281 4870 flags.go:64] FLAG: --memory-manager-policy="None" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819292 4870 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819304 4870 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819317 4870 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819329 4870 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819363 4870 flags.go:64] FLAG: --node-status-max-images="50" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819375 4870 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819387 4870 flags.go:64] FLAG: --oom-score-adj="-999" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819399 4870 flags.go:64] FLAG: --pod-cidr="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819410 4870 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819431 4870 flags.go:64] FLAG: --pod-manifest-path="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819443 4870 flags.go:64] FLAG: --pod-max-pids="-1" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819455 4870 flags.go:64] FLAG: --pods-per-core="0" Jan 30 
08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819467 4870 flags.go:64] FLAG: --port="10250" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819479 4870 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819492 4870 flags.go:64] FLAG: --provider-id="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819504 4870 flags.go:64] FLAG: --qos-reserved="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819518 4870 flags.go:64] FLAG: --read-only-port="10255" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819531 4870 flags.go:64] FLAG: --register-node="true" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819543 4870 flags.go:64] FLAG: --register-schedulable="true" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819555 4870 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819578 4870 flags.go:64] FLAG: --registry-burst="10" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819590 4870 flags.go:64] FLAG: --registry-qps="5" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819602 4870 flags.go:64] FLAG: --reserved-cpus="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819616 4870 flags.go:64] FLAG: --reserved-memory="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819631 4870 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819643 4870 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819656 4870 flags.go:64] FLAG: --rotate-certificates="false" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819670 4870 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819681 4870 flags.go:64] FLAG: --runonce="false" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819693 4870 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819706 4870 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819719 4870 flags.go:64] FLAG: --seccomp-default="false" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819731 4870 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819743 4870 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819756 4870 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819768 4870 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819781 4870 flags.go:64] FLAG: --storage-driver-password="root" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819793 4870 flags.go:64] FLAG: --storage-driver-secure="false" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819805 4870 flags.go:64] FLAG: --storage-driver-table="stats" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819816 4870 flags.go:64] FLAG: --storage-driver-user="root" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819828 4870 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819840 4870 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 
08:09:21.819854 4870 flags.go:64] FLAG: --system-cgroups="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819866 4870 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819921 4870 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819933 4870 flags.go:64] FLAG: --tls-cert-file="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819944 4870 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819977 4870 flags.go:64] FLAG: --tls-min-version="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.819990 4870 flags.go:64] FLAG: --tls-private-key-file="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.820003 4870 flags.go:64] FLAG: --topology-manager-policy="none" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.820015 4870 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.820027 4870 flags.go:64] FLAG: --topology-manager-scope="container" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.820040 4870 flags.go:64] FLAG: --v="2" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.820058 4870 flags.go:64] FLAG: --version="false" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.820073 4870 flags.go:64] FLAG: --vmodule="" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.820088 4870 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.820100 4870 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820399 4870 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820418 4870 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820452 4870 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820465 4870 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820479 4870 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
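The long flags.go:64 run above is the kubelet echoing the effective value of every command-line flag at verbosity v=2 (the dump itself shows --v="2"), which makes it a reliable snapshot of the process's configuration. Pulling that snapshot out of a saved journal is mostly one regular expression; the file name below is a placeholder assumption:

```go
// Extract the FLAG: --name="value" pairs that kubelet logs at v>=2
// from a saved journal dump.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`FLAG: (--[\w.-]+)="(.*?)"`)
	f, err := os.Open("kubelet-journal.log") // placeholder name
	if err != nil {
		panic(err)
	}
	defer f.Close()

	flags := map[string]string{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			flags[m[1]] = m[2]
		}
	}
	fmt.Printf("%d flags, e.g. --config=%q\n", len(flags), flags["--config"])
}
```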
Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820491 4870 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820503 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820514 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820525 4870 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820534 4870 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820545 4870 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820556 4870 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820565 4870 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820576 4870 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820586 4870 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820596 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820606 4870 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820616 4870 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820626 4870 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820636 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820646 4870 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820656 4870 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820666 4870 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820676 4870 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820686 4870 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820696 4870 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820707 4870 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820721 4870 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
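This is the second full pass over the same gate list (timestamps 08:09:21.814… for the first burst, .820… for this one, with two more below at .836… and .837…): the gate map is re-applied at successive startup stages and each application logs the complete set again, so the bulk here is repetition rather than new information. One way to confirm that mechanically, again assuming the journal is saved to kubelet-journal.log:

```go
// Count how many times each "unrecognized feature gate" warning
// repeats; equal counts across all gate names confirm the bursts are
// whole-list replays, not new findings.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

func main() {
	re := regexp.MustCompile(`unrecognized feature gate: ([A-Za-z0-9]+)`)
	counts := map[string]int{}

	f, err := os.Open("kubelet-journal.log") // placeholder name
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]]++
		}
	}

	names := make([]string, 0, len(counts))
	for n := range counts {
		names = append(names, n)
	}
	sort.Strings(names)
	for _, n := range names {
		fmt.Printf("%3d %s\n", counts[n], n)
	}
}
```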
Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820734 4870 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820744 4870 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820762 4870 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820774 4870 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820785 4870 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820795 4870 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820805 4870 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820814 4870 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820824 4870 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820836 4870 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820866 4870 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820919 4870 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820933 4870 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820945 4870 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820957 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820967 4870 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820978 4870 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820988 4870 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.820998 4870 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821008 4870 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821018 4870 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821028 4870 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821038 4870 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821079 4870 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821091 4870 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821106 4870 feature_gate.go:330] unrecognized feature gate: 
ImageStreamImportMode Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821116 4870 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821126 4870 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821136 4870 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821146 4870 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821156 4870 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821166 4870 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821176 4870 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821185 4870 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821200 4870 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821210 4870 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821220 4870 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821230 4870 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821240 4870 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821250 4870 feature_gate.go:330] unrecognized feature gate: Example Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821260 4870 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821270 4870 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.821280 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.822383 4870 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.836194 4870 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.836245 4870 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836378 4870 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836392 4870 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836401 4870 feature_gate.go:330] 
unrecognized feature gate: ClusterMonitoringConfig Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836414 4870 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836426 4870 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836436 4870 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836444 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836453 4870 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836462 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836470 4870 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836479 4870 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836488 4870 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836496 4870 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836505 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836514 4870 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836522 4870 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836530 4870 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836537 4870 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836545 4870 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836553 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836561 4870 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836569 4870 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836576 4870 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836584 4870 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836592 4870 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836600 4870 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836608 4870 feature_gate.go:330] unrecognized feature gate: Example Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836616 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 
08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836623 4870 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836631 4870 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836639 4870 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836647 4870 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836654 4870 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836664 4870 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836678 4870 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836694 4870 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836713 4870 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836723 4870 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836735 4870 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836746 4870 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836756 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836766 4870 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836780 4870 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836792 4870 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836802 4870 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836811 4870 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836819 4870 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836831 4870 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
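After each warning burst the kubelet prints the map it actually ended up with, and it is the same fifteen-entry map every time: only upstream Kubernetes gates survive, with the four override notices (CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, KMSv1, ValidatingAdmissionPolicy) reflected as true. The "feature gates: {map[...]}" line is Go's fmt rendering of a map[string]bool, so it can be parsed straight back; a sketch, with the input copied from this journal:

```go
// Parse the "feature gates: {map[...]}" summary back into a Go map.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func parseGates(line string) map[string]bool {
	start := strings.Index(line, "map[")
	end := strings.LastIndex(line, "]")
	gates := map[string]bool{}
	for _, kv := range strings.Fields(line[start+len("map[") : end]) {
		name, val, ok := strings.Cut(kv, ":")
		if !ok {
			continue
		}
		b, err := strconv.ParseBool(val)
		if err != nil {
			continue
		}
		gates[name] = b
	}
	return gates
}

func main() {
	line := `feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}`
	gates := parseGates(line)
	fmt.Println(len(gates), "gates; KMSv1 =", gates["KMSv1"])
}
```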
Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836841 4870 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836849 4870 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836858 4870 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836866 4870 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836917 4870 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836938 4870 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836949 4870 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836960 4870 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836968 4870 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836976 4870 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836985 4870 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.836993 4870 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837003 4870 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837010 4870 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837018 4870 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837028 4870 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
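The "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" line above (just after the v1.31.5 version banner) reports the raw environment variables, and empty strings mean the Go runtime defaults are in force: a GC target of 100%, GOMAXPROCS equal to the CPU count (12 on this machine, per the cadvisor inventory below), and the default traceback level. A small program showing what those defaults resolve to:

```go
// Show the effective values behind empty GOGC/GOMAXPROCS/GOTRACEBACK
// env vars, as reported in kubelet's "Golang settings" line.
package main

import (
	"fmt"
	"os"
	"runtime"
	"runtime/debug"
)

func main() {
	fmt.Printf("GOGC=%q GOMAXPROCS=%q GOTRACEBACK=%q\n",
		os.Getenv("GOGC"), os.Getenv("GOMAXPROCS"), os.Getenv("GOTRACEBACK"))

	// Effective values when the variables are unset:
	fmt.Println("procs:", runtime.GOMAXPROCS(0)) // 0 = query without changing

	old := debug.SetGCPercent(100) // returns the previous setting (100 by default)
	debug.SetGCPercent(old)        // restore whatever was in effect
	fmt.Println("gc percent:", old)
}
```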
Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837039 4870 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837048 4870 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837056 4870 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837064 4870 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837072 4870 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837080 4870 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837090 4870 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.837104 4870 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837365 4870 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837380 4870 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837388 4870 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837397 4870 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837405 4870 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837413 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837420 4870 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837428 4870 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837437 4870 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837445 4870 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837452 4870 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837460 4870 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837467 4870 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837475 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837483 4870 feature_gate.go:330] 
unrecognized feature gate: ClusterMonitoringConfig Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837493 4870 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837504 4870 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837512 4870 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837521 4870 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837531 4870 feature_gate.go:330] unrecognized feature gate: Example Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837539 4870 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837549 4870 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837557 4870 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837565 4870 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837574 4870 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837584 4870 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837597 4870 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837617 4870 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837627 4870 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837637 4870 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837647 4870 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837657 4870 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837666 4870 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837675 4870 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837688 4870 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837698 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837708 4870 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837717 4870 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837727 4870 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837736 4870 feature_gate.go:330] 
unrecognized feature gate: VSphereDriverConfiguration Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837750 4870 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837763 4870 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837773 4870 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837783 4870 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837793 4870 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837803 4870 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837813 4870 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837823 4870 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837833 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837842 4870 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837854 4870 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837865 4870 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837909 4870 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837921 4870 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837931 4870 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837941 4870 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837954 4870 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837966 4870 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837975 4870 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837984 4870 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.837994 4870 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.838005 4870 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.838015 4870 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.838026 4870 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.838035 4870 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.838045 4870 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.838055 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.838064 4870 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.838073 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.838082 4870 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 08:09:21 crc kubenswrapper[4870]: W0130 08:09:21.838095 4870 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.838113 4870 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.839824 4870 server.go:940] "Client rotation is on, will bootstrap in background" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.846463 4870 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.846619 4870 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
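"Client rotation is on" plus "Current kubeconfig file contents are still valid" means the kubelet can start on its existing client certificate and only needs to rotate in the background; the certificate_manager lines just below give the expiry (2026-02-24) and a rotation deadline (2025-11-19) that sits well before it. Upstream picks that deadline at a random point in roughly the 70-90% span of the certificate's validity — an assumption on my part, not something this log states — and the first rotation attempt below fails with connection refused simply because api-int.crc.testing:6443 is not up yet this early in boot, so the manager retries. A sketch of the jittered deadline:

```go
// Jittered rotation deadline, as I understand upstream kubelet's
// certificate manager: a uniform point in [70%, 90%) of the cert's
// validity window. The issue time below is an assumption; only the
// expiry appears in this journal.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	validity := notAfter.Sub(notBefore)
	jitter := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(jitter * float64(validity)))
}

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)  // expiry from the log
	fmt.Println("rotate at:", rotationDeadline(notBefore, notAfter))
}
```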
Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.848397 4870 server.go:997] "Starting client certificate rotation" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.848446 4870 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.850338 4870 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-19 16:12:58.090096136 +0000 UTC Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.850510 4870 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.887282 4870 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.889633 4870 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 08:09:21 crc kubenswrapper[4870]: E0130 08:09:21.891251 4870 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.227:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.912787 4870 log.go:25] "Validated CRI v1 runtime API" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.961411 4870 log.go:25] "Validated CRI v1 image API" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.963295 4870 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.969259 4870 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-30-08-04-20-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.969316 4870 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.993383 4870 manager.go:217] Machine: {Timestamp:2026-01-30 08:09:21.98989245 +0000 UTC m=+0.685439599 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:7dbac932-0e54-4045-a1f0-fa334c8e1b7e BootID:42bb4058-de5f-47d3-b90e-bda57dd064e9 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 
Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:91:45:e9 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:91:45:e9 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:41:a8:12 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:20:47:3a Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:5e:ae:0a Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:97:33:fe Speed:-1 Mtu:1496} {Name:eth10 MacAddress:aa:e9:cb:49:12:c2 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:a2:f3:7a:55:fe:63 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] 
Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.993700 4870 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.993914 4870 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.995679 4870 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.996010 4870 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.996061 4870 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.996352 4870 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.996369 4870 
container_manager_linux.go:303] "Creating device plugin manager" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.997250 4870 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.997297 4870 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.998616 4870 state_mem.go:36] "Initialized new in-memory state store" Jan 30 08:09:21 crc kubenswrapper[4870]: I0130 08:09:21.998735 4870 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.002158 4870 kubelet.go:418] "Attempting to sync node with API server" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.002183 4870 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.002251 4870 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.002281 4870 kubelet.go:324] "Adding apiserver pod source" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.002301 4870 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.006515 4870 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.007834 4870 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 30 08:09:22 crc kubenswrapper[4870]: W0130 08:09:22.008524 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.227:6443: connect: connection refused Jan 30 08:09:22 crc kubenswrapper[4870]: W0130 08:09:22.008613 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.227:6443: connect: connection refused Jan 30 08:09:22 crc kubenswrapper[4870]: E0130 08:09:22.008716 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.227:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:09:22 crc kubenswrapper[4870]: E0130 08:09:22.008796 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.227:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.010463 4870 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.012473 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 30 08:09:22 crc 
kubenswrapper[4870]: I0130 08:09:22.012522 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.012548 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.012590 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.012614 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.012627 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.012645 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.012666 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.012702 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.012721 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.012757 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.012771 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.013938 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.014319 4870 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.227:6443: connect: connection refused Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.014648 4870 server.go:1280] "Started kubelet" Jan 30 08:09:22 crc systemd[1]: Started Kubernetes Kubelet. 
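[editor's note] Nearly every E-level line in this boot sequence is the same failure, dial tcp 38.129.56.227:6443: connect: connection refused: the kubelet starts before the static-pod kube-apiserver is listening, so CSR creation, informer lists, lease renewal, and event posting all fail and retry until the apiserver comes up. A throwaway probe like the following reproduces the check against the endpoint named in the messages (api-int.crc.testing:6443, from this log's environment):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// main polls the apiserver endpoint the kubelet is trying to reach.
// "connection refused" means the host is reachable but nothing listens
// on 6443 yet -- expected while the static-pod apiserver is starting.
func main() {
	const addr = "api-int.crc.testing:6443" // from the log lines above
	for i := 0; i < 5; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Println("dial:", err) // e.g. connect: connection refused
			time.Sleep(time.Second)
			continue
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
		return
	}
}
```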
Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.020327 4870 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.020495 4870 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.021109 4870 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.024671 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.024718 4870 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.024748 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 00:52:45.735490502 +0000 UTC Jan 30 08:09:22 crc kubenswrapper[4870]: E0130 08:09:22.025076 4870 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.025268 4870 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.025287 4870 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.025403 4870 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 08:09:22 crc kubenswrapper[4870]: W0130 08:09:22.026408 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.227:6443: connect: connection refused Jan 30 08:09:22 crc kubenswrapper[4870]: E0130 08:09:22.026470 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.227:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.028832 4870 factory.go:55] Registering systemd factory Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.028868 4870 factory.go:221] Registration of the systemd container factory successfully Jan 30 08:09:22 crc kubenswrapper[4870]: E0130 08:09:22.029931 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.227:6443: connect: connection refused" interval="200ms" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.032919 4870 server.go:460] "Adding debug handlers to kubelet server" Jan 30 08:09:22 crc kubenswrapper[4870]: E0130 08:09:22.032095 4870 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.227:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f73dac4603525 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 08:09:22.014598437 +0000 UTC m=+0.710145576,LastTimestamp:2026-01-30 08:09:22.014598437 +0000 UTC m=+0.710145576,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.034466 4870 factory.go:153] Registering CRI-O factory Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.034498 4870 factory.go:221] Registration of the crio container factory successfully Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.034578 4870 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.034614 4870 factory.go:103] Registering Raw factory Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.034630 4870 manager.go:1196] Started watching for new ooms in manager Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.035338 4870 manager.go:319] Starting recovery of all containers Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.038617 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.038654 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.038665 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.038674 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.038684 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.038692 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040366 4870 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040387 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040398 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040411 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040420 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040430 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040438 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040450 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040461 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040470 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040479 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040488 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040496 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040507 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040515 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040525 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040536 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040548 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040557 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040566 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040574 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040584 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040614 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040624 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040632 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040641 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040651 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040660 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040669 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040677 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040686 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040695 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040704 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040713 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040722 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040731 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040740 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040748 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040758 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040767 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040776 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040786 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040812 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040820 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040829 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040843 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040852 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040898 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040914 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040927 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040941 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040954 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040966 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040977 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.040994 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041007 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041019 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041035 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041049 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041060 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041085 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041098 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041114 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041126 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041138 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041149 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041163 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041178 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041218 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041230 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041243 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041257 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041268 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041280 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041293 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041305 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041317 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041330 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041341 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041357 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041369 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041380 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041392 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041404 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041417 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041432 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041446 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041458 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041469 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041480 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041491 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041502 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041513 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041529 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041537 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041575 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041584 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041593 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041616 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041630 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041639 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041649 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041658 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041673 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041686 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041695 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041704 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041716 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041725 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041734 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041742 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041758 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041766 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041774 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041783 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041791 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041799 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041808 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041816 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041827 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041836 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041844 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041853 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041862 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041892 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041910 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041924 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041961 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041973 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041982 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.041992 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042004 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042013 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042022 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042030 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042040 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042048 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042057 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042066 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042075 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042087 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042098 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042107 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042120 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042129 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042138 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042146 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042159 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042169 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042177 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042186 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042195 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042207 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042216 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042224 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042233 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042241 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042254 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042262 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042272 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042280 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042289 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042298 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042307 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042316 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042324 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042333 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042342 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042354 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042362 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042371 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042379 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042387 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042396 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042409 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042420 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042429 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042437 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042448 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042456 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042467 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042477 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042485 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042496 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042504 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042513 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042521 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042535 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042543 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042553 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042561 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042569 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042581 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042590 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042599 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042607 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042616 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042624 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042633 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042641 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042650 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042658 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042667 4870 reconstruct.go:97] "Volume reconstruction finished" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.042674 4870 reconciler.go:26] "Reconciler: start to sync state" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.051366 4870 manager.go:324] Recovery completed Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.062499 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.064333 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.064376 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.064389 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.065492 4870 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.065506 4870 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.065531 4870 state_mem.go:36] "Initialized new in-memory state store" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.068656 4870 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.073086 4870 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.073212 4870 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.073290 4870 kubelet.go:2335] "Starting kubelet main sync loop" Jan 30 08:09:22 crc kubenswrapper[4870]: E0130 08:09:22.073386 4870 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 08:09:22 crc kubenswrapper[4870]: W0130 08:09:22.076014 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.227:6443: connect: connection refused Jan 30 08:09:22 crc kubenswrapper[4870]: E0130 08:09:22.076110 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.227:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.080050 4870 policy_none.go:49] "None policy: Start" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.080708 4870 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.080735 4870 state_mem.go:35] "Initializing new in-memory state store" Jan 30 08:09:22 crc kubenswrapper[4870]: E0130 08:09:22.125945 4870 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.139982 4870 manager.go:334] "Starting Device Plugin manager" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.140047 4870 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.140065 4870 server.go:79] "Starting device plugin registration server" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.140558 4870 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.140580 4870 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.140728 4870 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.140948 4870 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.140966 4870 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 08:09:22 crc kubenswrapper[4870]: E0130 08:09:22.147405 4870 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.173541 4870 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 08:09:22 crc kubenswrapper[4870]: 
I0130 08:09:22.173631 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.174761 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.174791 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.174799 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.174905 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.175185 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.175225 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.175713 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.175762 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.175775 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.175925 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.175935 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.175958 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.176736 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.176815 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.176840 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.176755 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.176870 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.176887 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.176894 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.176899 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.176902 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.177022 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.177047 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.177093 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.177849 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.177913 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.177926 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.178212 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.178242 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.178251 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.178403 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.178559 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.178606 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.179496 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.179551 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.179602 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.179654 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.179678 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.179689 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.179900 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.179926 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.180426 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.180481 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.180495 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:22 crc kubenswrapper[4870]: E0130 08:09:22.231049 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.227:6443: connect: connection refused" interval="400ms" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.240716 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.242249 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.242309 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.242328 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.242366 4870 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 08:09:22 crc kubenswrapper[4870]: E0130 08:09:22.243125 4870 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.227:6443: connect: connection refused" node="crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.244581 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.244639 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.244678 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.244711 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.244851 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.244920 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.244955 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.244984 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.245046 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.245120 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.245185 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.245225 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.245300 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.245366 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.245400 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.346761 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.346836 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.346899 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.346932 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.346963 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.346990 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347003 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347069 4870 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347072 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347137 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347018 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347148 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347190 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347088 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347076 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347223 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347246 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 
08:09:22.347245 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347274 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347310 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347354 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347377 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347397 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347418 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347436 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347455 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347493 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347507 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347524 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.347682 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.443830 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.445125 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.445163 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.445173 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.445200 4870 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 08:09:22 crc kubenswrapper[4870]: E0130 08:09:22.445612 4870 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.227:6443: connect: connection refused" node="crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.516156 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.534188 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.544032 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.563711 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.566646 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:09:22 crc kubenswrapper[4870]: W0130 08:09:22.567563 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-1affebc35dae28dcd241b2f06b70f52f5ecf2267eb04e63eb59eed161581f6dc WatchSource:0}: Error finding container 1affebc35dae28dcd241b2f06b70f52f5ecf2267eb04e63eb59eed161581f6dc: Status 404 returned error can't find the container with id 1affebc35dae28dcd241b2f06b70f52f5ecf2267eb04e63eb59eed161581f6dc Jan 30 08:09:22 crc kubenswrapper[4870]: W0130 08:09:22.573108 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-5e388a87f7a55b9df07599741c664a4182aee780698c958d97fb31d51483841c WatchSource:0}: Error finding container 5e388a87f7a55b9df07599741c664a4182aee780698c958d97fb31d51483841c: Status 404 returned error can't find the container with id 5e388a87f7a55b9df07599741c664a4182aee780698c958d97fb31d51483841c Jan 30 08:09:22 crc kubenswrapper[4870]: W0130 08:09:22.574423 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-05af0ac4aba97e96a52a9eb545c02aec606f709da00cc7629cf49728273954c6 WatchSource:0}: Error finding container 05af0ac4aba97e96a52a9eb545c02aec606f709da00cc7629cf49728273954c6: Status 404 returned error can't find the container with id 05af0ac4aba97e96a52a9eb545c02aec606f709da00cc7629cf49728273954c6 Jan 30 08:09:22 crc kubenswrapper[4870]: W0130 08:09:22.584135 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-d5839aa0654842c1099eb16eb3c04755be1b93c37aff624f9d47a89b6923fde3 WatchSource:0}: Error finding container d5839aa0654842c1099eb16eb3c04755be1b93c37aff624f9d47a89b6923fde3: Status 404 returned error can't find the container with id d5839aa0654842c1099eb16eb3c04755be1b93c37aff624f9d47a89b6923fde3 Jan 30 08:09:22 crc kubenswrapper[4870]: W0130 08:09:22.591553 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-705ac72741ca37498e805dcc6db1a1312d0c51768752f267467145db9bb8144c WatchSource:0}: Error finding container 705ac72741ca37498e805dcc6db1a1312d0c51768752f267467145db9bb8144c: Status 404 returned error can't find the container with id 705ac72741ca37498e805dcc6db1a1312d0c51768752f267467145db9bb8144c Jan 30 08:09:22 crc kubenswrapper[4870]: E0130 08:09:22.631791 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.227:6443: connect: connection refused" interval="800ms" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.846578 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.848160 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.848249 4870 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.848271 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:22 crc kubenswrapper[4870]: I0130 08:09:22.848340 4870 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 08:09:22 crc kubenswrapper[4870]: E0130 08:09:22.849335 4870 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.227:6443: connect: connection refused" node="crc" Jan 30 08:09:22 crc kubenswrapper[4870]: W0130 08:09:22.908914 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.227:6443: connect: connection refused Jan 30 08:09:22 crc kubenswrapper[4870]: E0130 08:09:22.909082 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.227:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:09:22 crc kubenswrapper[4870]: W0130 08:09:22.979919 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.227:6443: connect: connection refused Jan 30 08:09:22 crc kubenswrapper[4870]: E0130 08:09:22.980062 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.227:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:09:23 crc kubenswrapper[4870]: I0130 08:09:23.015496 4870 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.227:6443: connect: connection refused Jan 30 08:09:23 crc kubenswrapper[4870]: I0130 08:09:23.025553 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 18:04:54.209445878 +0000 UTC Jan 30 08:09:23 crc kubenswrapper[4870]: I0130 08:09:23.080758 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5e388a87f7a55b9df07599741c664a4182aee780698c958d97fb31d51483841c"} Jan 30 08:09:23 crc kubenswrapper[4870]: I0130 08:09:23.083487 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1affebc35dae28dcd241b2f06b70f52f5ecf2267eb04e63eb59eed161581f6dc"} Jan 30 08:09:23 crc kubenswrapper[4870]: I0130 08:09:23.085094 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"705ac72741ca37498e805dcc6db1a1312d0c51768752f267467145db9bb8144c"} Jan 30 08:09:23 crc kubenswrapper[4870]: I0130 08:09:23.086140 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d5839aa0654842c1099eb16eb3c04755be1b93c37aff624f9d47a89b6923fde3"} Jan 30 08:09:23 crc kubenswrapper[4870]: I0130 08:09:23.087216 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"05af0ac4aba97e96a52a9eb545c02aec606f709da00cc7629cf49728273954c6"} Jan 30 08:09:23 crc kubenswrapper[4870]: W0130 08:09:23.169107 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.227:6443: connect: connection refused Jan 30 08:09:23 crc kubenswrapper[4870]: E0130 08:09:23.169364 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.227:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:09:23 crc kubenswrapper[4870]: E0130 08:09:23.433019 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.227:6443: connect: connection refused" interval="1.6s" Jan 30 08:09:23 crc kubenswrapper[4870]: W0130 08:09:23.532122 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.227:6443: connect: connection refused Jan 30 08:09:23 crc kubenswrapper[4870]: E0130 08:09:23.532234 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.227:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:09:23 crc kubenswrapper[4870]: I0130 08:09:23.649949 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:23 crc kubenswrapper[4870]: I0130 08:09:23.652309 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:23 crc kubenswrapper[4870]: I0130 08:09:23.652388 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:23 crc kubenswrapper[4870]: I0130 08:09:23.652409 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:23 crc kubenswrapper[4870]: I0130 08:09:23.652455 4870 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 08:09:23 crc kubenswrapper[4870]: E0130 08:09:23.653306 4870 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial 
tcp 38.129.56.227:6443: connect: connection refused" node="crc" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.015737 4870 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.227:6443: connect: connection refused Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.025748 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 05:00:03.122819448 +0000 UTC Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.028992 4870 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 08:09:24 crc kubenswrapper[4870]: E0130 08:09:24.030050 4870 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.227:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.091850 4870 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d" exitCode=0 Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.091928 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d"} Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.092106 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.093376 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.093428 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.093445 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.095201 4870 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="fe9b232957f2eea82ca2086063aa00fe190428df468751e40d205478af3ea9a1" exitCode=0 Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.095289 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.095380 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"fe9b232957f2eea82ca2086063aa00fe190428df468751e40d205478af3ea9a1"} Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.096568 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.096620 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.096637 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.099129 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d"} Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.099226 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9"} Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.099298 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673"} Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.099387 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee"} Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.099204 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.100468 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.100510 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.100528 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.101159 4870 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e" exitCode=0 Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.101233 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e"} Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.101380 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.102538 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.102587 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.102612 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.103200 4870 generic.go:334] 
"Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03" exitCode=0 Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.103259 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03"} Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.103403 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.105270 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.105301 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.105312 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.106844 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.108154 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.108227 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:24 crc kubenswrapper[4870]: I0130 08:09:24.108312 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:25 crc kubenswrapper[4870]: E0130 08:09:25.011917 4870 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.227:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f73dac4603525 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 08:09:22.014598437 +0000 UTC m=+0.710145576,LastTimestamp:2026-01-30 08:09:22.014598437 +0000 UTC m=+0.710145576,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.015688 4870 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.227:6443: connect: connection refused Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.026730 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 06:40:29.991810548 +0000 UTC Jan 30 08:09:25 crc kubenswrapper[4870]: E0130 08:09:25.033710 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.227:6443: connect: connection refused" 
interval="3.2s" Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.110295 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88"} Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.110345 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3"} Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.110358 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f"} Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.110371 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230"} Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.112495 4870 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9" exitCode=0 Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.112547 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9"} Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.112580 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.113706 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.113733 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.113743 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.116218 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fb3033803f0aa71bcbc5737ac85911714c8cd9c224b2cb7813a3876e454e081f"} Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.116247 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"11111845069f3d8452723ee5954d1f68e314872eda1d204abf733f4871a7ec40"} Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.116272 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2eb553cf7aed8c067e432248cd08fdd7db46f939c608b18c4e25737b8d115d52"} Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.116317 4870 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.120242 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.120286 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.120306 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.128048 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.128417 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0b968ca52ac44d69f056bcc02f5e1b2ad03c9700bf54ec25d893c936a721f595"} Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.128482 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.129689 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.129717 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.129727 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.129978 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.130004 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.130020 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:25 crc kubenswrapper[4870]: W0130 08:09:25.232134 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.227:6443: connect: connection refused Jan 30 08:09:25 crc kubenswrapper[4870]: E0130 08:09:25.232205 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.227:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.253998 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.255325 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.255388 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:25 crc 
kubenswrapper[4870]: I0130 08:09:25.255398 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:25 crc kubenswrapper[4870]: I0130 08:09:25.255428 4870 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 08:09:25 crc kubenswrapper[4870]: E0130 08:09:25.255956 4870 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.227:6443: connect: connection refused" node="crc" Jan 30 08:09:25 crc kubenswrapper[4870]: W0130 08:09:25.690443 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.227:6443: connect: connection refused Jan 30 08:09:25 crc kubenswrapper[4870]: E0130 08:09:25.690552 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.227:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:09:25 crc kubenswrapper[4870]: W0130 08:09:25.718437 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.227:6443: connect: connection refused Jan 30 08:09:25 crc kubenswrapper[4870]: E0130 08:09:25.718527 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.227:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:09:25 crc kubenswrapper[4870]: W0130 08:09:25.768923 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.227:6443: connect: connection refused Jan 30 08:09:25 crc kubenswrapper[4870]: E0130 08:09:25.769006 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.227:6443: connect: connection refused" logger="UnhandledError" Jan 30 08:09:26 crc kubenswrapper[4870]: I0130 08:09:26.027295 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 11:23:54.532041702 +0000 UTC Jan 30 08:09:26 crc kubenswrapper[4870]: I0130 08:09:26.136816 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596"} Jan 30 08:09:26 crc kubenswrapper[4870]: I0130 08:09:26.136969 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:26 crc kubenswrapper[4870]: I0130 08:09:26.138221 
4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:26 crc kubenswrapper[4870]: I0130 08:09:26.138268 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:26 crc kubenswrapper[4870]: I0130 08:09:26.138283 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:26 crc kubenswrapper[4870]: I0130 08:09:26.139524 4870 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd" exitCode=0 Jan 30 08:09:26 crc kubenswrapper[4870]: I0130 08:09:26.139595 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:26 crc kubenswrapper[4870]: I0130 08:09:26.139619 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd"} Jan 30 08:09:26 crc kubenswrapper[4870]: I0130 08:09:26.139642 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 08:09:26 crc kubenswrapper[4870]: I0130 08:09:26.139696 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:26 crc kubenswrapper[4870]: I0130 08:09:26.139599 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:26 crc kubenswrapper[4870]: I0130 08:09:26.140337 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:26 crc kubenswrapper[4870]: I0130 08:09:26.140354 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:26 crc kubenswrapper[4870]: I0130 08:09:26.140361 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:26 crc kubenswrapper[4870]: I0130 08:09:26.140652 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:26 crc kubenswrapper[4870]: I0130 08:09:26.140667 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:26 crc kubenswrapper[4870]: I0130 08:09:26.140674 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:26 crc kubenswrapper[4870]: I0130 08:09:26.140912 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:26 crc kubenswrapper[4870]: I0130 08:09:26.141034 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:26 crc kubenswrapper[4870]: I0130 08:09:26.141151 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:27 crc kubenswrapper[4870]: I0130 08:09:27.028245 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:01:11.674842351 +0000 UTC Jan 30 08:09:27 crc kubenswrapper[4870]: I0130 08:09:27.145993 4870 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:27 crc kubenswrapper[4870]: I0130 08:09:27.146466 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572"} Jan 30 08:09:27 crc kubenswrapper[4870]: I0130 08:09:27.146500 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973"} Jan 30 08:09:27 crc kubenswrapper[4870]: I0130 08:09:27.146510 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf"} Jan 30 08:09:27 crc kubenswrapper[4870]: I0130 08:09:27.146520 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e"} Jan 30 08:09:27 crc kubenswrapper[4870]: I0130 08:09:27.146571 4870 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:09:27 crc kubenswrapper[4870]: I0130 08:09:27.146598 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:27 crc kubenswrapper[4870]: I0130 08:09:27.146942 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:27 crc kubenswrapper[4870]: I0130 08:09:27.146985 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:27 crc kubenswrapper[4870]: I0130 08:09:27.147000 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:27 crc kubenswrapper[4870]: I0130 08:09:27.147130 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:27 crc kubenswrapper[4870]: I0130 08:09:27.147161 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:27 crc kubenswrapper[4870]: I0130 08:09:27.147170 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:27 crc kubenswrapper[4870]: I0130 08:09:27.713962 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:09:27 crc kubenswrapper[4870]: I0130 08:09:27.714160 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:27 crc kubenswrapper[4870]: I0130 08:09:27.715533 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:27 crc kubenswrapper[4870]: I0130 08:09:27.715591 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:27 crc kubenswrapper[4870]: I0130 08:09:27.715614 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 
08:09:28.029043 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 02:41:15.413907152 +0000 UTC Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.036503 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.080933 4870 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.155833 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.155827 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2"} Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.155945 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.156672 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.156742 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.156769 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.157341 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.157375 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.157384 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.456636 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.457853 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.457916 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.457929 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.457954 4870 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.552628 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.562604 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.819352 4870 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.819751 4870 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.819822 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.821237 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.821274 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:28 crc kubenswrapper[4870]: I0130 08:09:28.821290 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:29 crc kubenswrapper[4870]: I0130 08:09:29.030079 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 10:37:07.309467741 +0000 UTC Jan 30 08:09:29 crc kubenswrapper[4870]: I0130 08:09:29.158621 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:29 crc kubenswrapper[4870]: I0130 08:09:29.158762 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:29 crc kubenswrapper[4870]: I0130 08:09:29.159988 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:29 crc kubenswrapper[4870]: I0130 08:09:29.160024 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:29 crc kubenswrapper[4870]: I0130 08:09:29.160036 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:29 crc kubenswrapper[4870]: I0130 08:09:29.160779 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:29 crc kubenswrapper[4870]: I0130 08:09:29.160834 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:29 crc kubenswrapper[4870]: I0130 08:09:29.160855 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:29 crc kubenswrapper[4870]: I0130 08:09:29.971696 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:09:29 crc kubenswrapper[4870]: I0130 08:09:29.971862 4870 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:09:29 crc kubenswrapper[4870]: I0130 08:09:29.971967 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:29 crc kubenswrapper[4870]: I0130 08:09:29.973397 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:29 crc kubenswrapper[4870]: I0130 08:09:29.973478 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:29 crc kubenswrapper[4870]: I0130 08:09:29.973504 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:30 crc 
kubenswrapper[4870]: I0130 08:09:30.030336 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 08:45:18.3674693 +0000 UTC Jan 30 08:09:30 crc kubenswrapper[4870]: I0130 08:09:30.161093 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:30 crc kubenswrapper[4870]: I0130 08:09:30.162234 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:30 crc kubenswrapper[4870]: I0130 08:09:30.162288 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:30 crc kubenswrapper[4870]: I0130 08:09:30.162311 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:30 crc kubenswrapper[4870]: I0130 08:09:30.416062 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 30 08:09:30 crc kubenswrapper[4870]: I0130 08:09:30.416425 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:30 crc kubenswrapper[4870]: I0130 08:09:30.418191 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:30 crc kubenswrapper[4870]: I0130 08:09:30.418236 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:30 crc kubenswrapper[4870]: I0130 08:09:30.418254 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:30 crc kubenswrapper[4870]: I0130 08:09:30.807410 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 30 08:09:31 crc kubenswrapper[4870]: I0130 08:09:31.031427 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 06:10:28.634195557 +0000 UTC Jan 30 08:09:31 crc kubenswrapper[4870]: I0130 08:09:31.164126 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:31 crc kubenswrapper[4870]: I0130 08:09:31.165284 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:31 crc kubenswrapper[4870]: I0130 08:09:31.165350 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:31 crc kubenswrapper[4870]: I0130 08:09:31.165367 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:31 crc kubenswrapper[4870]: I0130 08:09:31.834538 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:09:31 crc kubenswrapper[4870]: I0130 08:09:31.834774 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:31 crc kubenswrapper[4870]: I0130 08:09:31.836552 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:31 crc kubenswrapper[4870]: I0130 08:09:31.836613 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:31 crc 
kubenswrapper[4870]: I0130 08:09:31.836634 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:32 crc kubenswrapper[4870]: I0130 08:09:32.032112 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 15:38:02.408900694 +0000 UTC Jan 30 08:09:32 crc kubenswrapper[4870]: E0130 08:09:32.147492 4870 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 08:09:32 crc kubenswrapper[4870]: I0130 08:09:32.170948 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:09:32 crc kubenswrapper[4870]: I0130 08:09:32.171122 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:32 crc kubenswrapper[4870]: I0130 08:09:32.172460 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:32 crc kubenswrapper[4870]: I0130 08:09:32.172511 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:32 crc kubenswrapper[4870]: I0130 08:09:32.172530 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:33 crc kubenswrapper[4870]: I0130 08:09:33.033138 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 02:36:10.106358132 +0000 UTC Jan 30 08:09:34 crc kubenswrapper[4870]: I0130 08:09:34.034184 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 06:07:28.414489924 +0000 UTC Jan 30 08:09:35 crc kubenswrapper[4870]: I0130 08:09:35.034941 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 06:11:55.209336684 +0000 UTC Jan 30 08:09:35 crc kubenswrapper[4870]: I0130 08:09:35.171313 4870 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 08:09:35 crc kubenswrapper[4870]: I0130 08:09:35.172032 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 08:09:35 crc kubenswrapper[4870]: I0130 08:09:35.883424 4870 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 
08:09:35 crc kubenswrapper[4870]: I0130 08:09:35.883734 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 08:09:35 crc kubenswrapper[4870]: I0130 08:09:35.891546 4870 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 08:09:35 crc kubenswrapper[4870]: I0130 08:09:35.891830 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 08:09:36 crc kubenswrapper[4870]: I0130 08:09:36.035225 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 14:49:43.838486054 +0000 UTC Jan 30 08:09:37 crc kubenswrapper[4870]: I0130 08:09:37.035374 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 19:29:32.926271856 +0000 UTC Jan 30 08:09:38 crc kubenswrapper[4870]: I0130 08:09:38.036169 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 23:10:16.746444552 +0000 UTC Jan 30 08:09:38 crc kubenswrapper[4870]: I0130 08:09:38.043318 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:09:38 crc kubenswrapper[4870]: I0130 08:09:38.043678 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:38 crc kubenswrapper[4870]: I0130 08:09:38.045271 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:38 crc kubenswrapper[4870]: I0130 08:09:38.045444 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:38 crc kubenswrapper[4870]: I0130 08:09:38.045597 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:39 crc kubenswrapper[4870]: I0130 08:09:39.037303 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 19:51:17.725661252 +0000 UTC Jan 30 08:09:39 crc kubenswrapper[4870]: I0130 08:09:39.979562 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:09:39 crc kubenswrapper[4870]: I0130 08:09:39.979825 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:39 crc kubenswrapper[4870]: I0130 08:09:39.981692 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:39 crc kubenswrapper[4870]: I0130 08:09:39.981970 4870 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:39 crc kubenswrapper[4870]: I0130 08:09:39.982176 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:39 crc kubenswrapper[4870]: I0130 08:09:39.987068 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.037862 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 08:03:01.27298832 +0000 UTC Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.188220 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.189497 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.189568 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.189594 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.843559 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.844041 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.845560 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.845629 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.845666 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:40 crc kubenswrapper[4870]: E0130 08:09:40.845726 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.849947 4870 trace.go:236] Trace[681849120]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 08:09:30.791) (total time: 10058ms): Jan 30 08:09:40 crc kubenswrapper[4870]: Trace[681849120]: ---"Objects listed" error: 10058ms (08:09:40.849) Jan 30 08:09:40 crc kubenswrapper[4870]: Trace[681849120]: [10.058516822s] [10.058516822s] END Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.850102 4870 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.850970 4870 trace.go:236] Trace[1249161597]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 08:09:29.080) (total time: 11770ms): Jan 30 08:09:40 crc kubenswrapper[4870]: Trace[1249161597]: ---"Objects listed" error: 11770ms (08:09:40.850) Jan 30 08:09:40 crc kubenswrapper[4870]: Trace[1249161597]: [11.770815313s] 
[11.770815313s] END Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.851018 4870 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.851856 4870 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.853315 4870 trace.go:236] Trace[1891258546]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 08:09:29.281) (total time: 11571ms): Jan 30 08:09:40 crc kubenswrapper[4870]: Trace[1891258546]: ---"Objects listed" error: 11571ms (08:09:40.853) Jan 30 08:09:40 crc kubenswrapper[4870]: Trace[1891258546]: [11.571399396s] [11.571399396s] END Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.853347 4870 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.855315 4870 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 30 08:09:40 crc kubenswrapper[4870]: E0130 08:09:40.855627 4870 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.862041 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.879597 4870 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.904785 4870 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52274->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.904848 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52274->192.168.126.11:17697: read: connection reset by peer" Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.904948 4870 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:56072->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.904964 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:56072->192.168.126.11:17697: read: connection reset by peer" Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.905154 4870 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get 
\"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 30 08:09:40 crc kubenswrapper[4870]: I0130 08:09:40.905168 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.026800 4870 apiserver.go:52] "Watching apiserver" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.029047 4870 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.029238 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.029538 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.029580 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.029602 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.029538 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.029783 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.029806 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.029797 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.030111 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.030143 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.032065 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.032137 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.032063 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.032393 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.032532 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.032617 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.032767 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.032969 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.033240 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.038188 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 12:18:15.68667703 +0000 UTC Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.065306 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.074662 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.082710 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.093723 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.103406 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.112220 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.126864 4870 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.139964 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.157051 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.157285 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.157366 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.157432 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.157479 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.157553 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.157635 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.157704 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.157769 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.157832 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.157648 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.157821 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.157886 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.157937 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.158002 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.158126 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.158202 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.158277 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.158343 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.158422 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.158521 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.158614 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.158133 4870 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.158409 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.158432 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.158467 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.158904 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159014 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159115 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159207 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159299 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159014 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159033 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159046 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159140 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159283 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159412 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.158778 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159373 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159349 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159513 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159549 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159554 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159618 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159657 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159690 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159720 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159754 4870 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159767 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159785 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159819 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159855 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159915 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159949 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159981 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.159985 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160006 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160019 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160061 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160083 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160096 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160114 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160131 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160196 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160209 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160236 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160274 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160291 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160299 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160349 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160366 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160376 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160435 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160461 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160485 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160498 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160508 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160533 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160541 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160555 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160577 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160600 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160624 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160645 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160668 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160692 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160715 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160737 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160762 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160784 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160807 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160831 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160855 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160905 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160931 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160957 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160553 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160982 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161005 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161031 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161055 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161081 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161105 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161129 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161153 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161175 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161224 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161272 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161296 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161319 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161340 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161363 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161387 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161413 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161436 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161460 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161483 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161531 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161555 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161581 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161608 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161629 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161653 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161677 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161700 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161722 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161744 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161793 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161895 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161945 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161970 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161992 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162014 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162039 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162062 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162086 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162111 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" 
(UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162134 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162156 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162177 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162200 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162222 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162244 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162266 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162287 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162309 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162331 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162353 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162375 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162399 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162423 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162446 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162469 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162492 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162514 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162537 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 30 08:09:41 crc kubenswrapper[4870]: 
I0130 08:09:41.162559 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162582 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162605 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162629 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162651 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162673 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162697 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162718 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162741 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162765 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 08:09:41 crc 
kubenswrapper[4870]: I0130 08:09:41.162788 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162810 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162836 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162860 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162902 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162926 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162949 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162973 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162996 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163020 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163042 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163096 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163122 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163146 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163170 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163194 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163217 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163239 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163261 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163286 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" 
(UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163309 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163332 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163356 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163382 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163407 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163431 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163457 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163481 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163506 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163530 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163554 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163578 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163608 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163640 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163663 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163687 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163712 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163737 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163763 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163788 4870 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163812 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163835 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163860 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.164083 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.164113 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160579 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.164137 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160654 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160734 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160751 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160786 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160903 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160918 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160972 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.160983 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161040 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161104 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161236 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161283 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161348 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161406 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161447 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161466 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161533 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161563 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161574 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161639 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161821 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161810 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161822 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161820 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.161996 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162403 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.162967 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163150 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163338 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.163472 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.164012 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.164110 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.165173 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.165256 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.165344 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.165132 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.165760 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.166049 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.166136 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 08:09:41.666116782 +0000 UTC m=+20.361663971 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.166491 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.166602 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.166715 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.166840 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.164162 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.166942 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167121 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167214 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167263 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167299 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167335 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167370 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167418 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167458 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167495 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167529 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167564 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167638 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167682 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167750 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167800 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167839 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167900 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167948 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167985 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: 
\"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168020 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168053 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168089 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168124 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168160 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168201 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168237 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168279 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168374 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168430 4870 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168452 4870 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168473 4870 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168492 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168511 4870 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168533 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168552 4870 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168571 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168590 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168608 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168626 4870 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168644 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node 
\"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167152 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168662 4870 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167285 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167356 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167437 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167479 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167871 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167900 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.167908 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168027 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168165 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168189 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168742 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168355 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168517 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168485 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168647 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168904 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.169045 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.169076 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.169186 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.169347 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.169496 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.169686 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.169993 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.170240 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.170526 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.170605 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.170674 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.171139 4870 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.169871 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.172165 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.172293 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.172316 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.172556 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.172774 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.172942 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.169793 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.168683 4870 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.172997 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173009 4870 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173023 4870 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173034 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173054 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173065 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173075 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173085 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173095 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173105 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173115 4870 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173110 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: 
"registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173127 4870 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173166 4870 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173192 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173210 4870 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173229 4870 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173248 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173686 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173722 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173744 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173761 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173780 4870 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173798 4870 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173815 4870 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173833 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173853 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173869 4870 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173908 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173926 4870 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173943 4870 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173964 4870 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173984 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174003 4870 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174020 4870 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174037 4870 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174054 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174071 4870 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174088 4870 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174108 4870 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174125 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174141 4870 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174162 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174178 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174195 4870 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174211 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174227 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174243 4870 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174261 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174282 4870 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174301 4870 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174319 4870 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174337 4870 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174356 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173506 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173620 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173628 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174394 4870 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174421 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174579 4870 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174607 4870 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174625 4870 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174643 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174660 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.173773 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174675 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174282 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174332 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174748 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.174831 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.181705 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.182071 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.182555 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.182902 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.183955 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.184784 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.185785 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.186392 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.186555 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.186566 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.186556 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.186669 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.186850 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.186895 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.186966 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.187371 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.187533 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.187541 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.187705 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.187125 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.187553 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.187894 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.187913 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.187939 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.187952 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.188227 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.188369 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.188676 4870 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.188765 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:09:41.688726763 +0000 UTC m=+20.384273942 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.189130 4870 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.189178 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:09:41.689167967 +0000 UTC m=+20.384715166 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.190696 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.194648 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.195771 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.196191 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.196281 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.196503 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.196752 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.196804 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.196914 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.197132 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.197193 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.197341 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.197489 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.197607 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.197636 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.197796 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.198192 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.198195 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.198286 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.198324 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.198608 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.198866 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.199824 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.199847 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.199860 4870 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.199937 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:09:41.69991977 +0000 UTC m=+20.395466959 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.200223 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.200706 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.200732 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.200745 4870 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.200824 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:09:41.700789078 +0000 UTC m=+20.396336237 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.201850 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.202154 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.202172 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.202555 4870 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596" exitCode=255 Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.202628 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596"} Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.204224 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.204462 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.204482 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.204533 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.204792 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.205097 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.205602 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.205640 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.206411 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.206796 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.207116 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.207527 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.207837 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.208000 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.208429 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.208445 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.208616 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.208669 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.208849 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.210103 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.209225 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.209317 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.209741 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.211049 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.211698 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.215494 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.216971 4870 scope.go:117] "RemoveContainer" containerID="6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.218230 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.223023 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.225188 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.227563 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.231704 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.234113 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.237248 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.241900 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.249729 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.262350 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.274896 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
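
The status-patch failures in the surrounding entries all share one signature: every patch is rejected with 'failed calling webhook "pod.network-node-identity.openshift.io": Post "https://127.0.0.1:9743/pod?timeout=10s": dial tcp 127.0.0.1:9743: connect: connection refused'. The API server is configured to consult this admission webhook before accepting pod status updates, and this early in boot nothing is listening on that loopback endpoint yet, so the kubelet's status manager keeps failing and retrying. A minimal Go probe of the endpoint, purely illustrative (the file name and structure are assumptions; only the address comes from the log):

    // webhookprobe.go: check whether the endpoint behind
    // pod.network-node-identity.openshift.io accepts TCP connections.
    // The address 127.0.0.1:9743 is taken from the log entries above;
    // everything else is an illustrative sketch.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
        if err != nil {
            // Mirrors the failure mode in the log: connection refused.
            fmt.Println("webhook endpoint unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("webhook endpoint is accepting connections")
    }

Once the network-node-identity webhook server comes up and binds 9743, these patches should stop failing on their own; the log does not suggest any manual intervention.
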
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275172 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275225 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275300 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275314 4870 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275327 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275339 4870 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275345 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275351 4870 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275386 4870 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275417 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275428 4870 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275437 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275446 4870 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275455 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275464 4870 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275490 4870 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275499 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275508 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275516 4870 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275524 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275532 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275540 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275566 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275576 4870 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275584 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275592 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275600 4870 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275609 4870 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275617 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275626 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275652 4870 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275660 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275669 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275677 4870 reconciler_common.go:293] "Volume detached for 
volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275684 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275692 4870 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275700 4870 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275724 4870 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275733 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275742 4870 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275751 4870 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275759 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275767 4870 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275774 4870 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275782 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275810 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275818 4870 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 
08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275826 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275834 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275842 4870 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275850 4870 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275858 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275894 4870 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275903 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275926 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275933 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275941 4870 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275949 4870 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275981 4870 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275989 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc 
kubenswrapper[4870]: I0130 08:09:41.275997 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276007 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276014 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276023 4870 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276031 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276064 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276073 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276086 4870 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276096 4870 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276107 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276140 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276154 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276165 4870 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" 
DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276233 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276243 4870 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276252 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276261 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276270 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276278 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276286 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276312 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276321 4870 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276329 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276337 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276345 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276353 4870 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath 
\"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276361 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276413 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276426 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276435 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276444 4870 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276452 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276461 4870 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276469 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276496 4870 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276505 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276513 4870 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276521 4870 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276529 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: 
\"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276538 4870 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276545 4870 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276553 4870 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276581 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276589 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276598 4870 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276606 4870 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276613 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276623 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276632 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276702 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276712 4870 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276723 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276732 4870 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276819 4870 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276827 4870 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276840 4870 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276847 4870 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276854 4870 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276862 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276896 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276907 4870 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.276915 4870 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.275385 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.345345 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.356106 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.360915 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:09:41 crc kubenswrapper[4870]: W0130 08:09:41.382938 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-3b44f3a0c6ed8d542aae4cf4d599e6f89992d9651f4dda83cc8f819abc975355 WatchSource:0}: Error finding container 3b44f3a0c6ed8d542aae4cf4d599e6f89992d9651f4dda83cc8f819abc975355: Status 404 returned error can't find the container with id 3b44f3a0c6ed8d542aae4cf4d599e6f89992d9651f4dda83cc8f819abc975355 Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.682320 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.682488 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:09:42.682456581 +0000 UTC m=+21.378003690 (durationBeforeRetry 1s). 
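
The teardown failure spelled out in the continuation just below has a different cause than the webhook errors: the CSI driver kubevirt.io.hostpath-provisioner has not re-registered with the kubelet after the restart, so the unmounter cannot obtain a CSI client, and nestedpendingoperations parks the operation with a retry deadline one second out. Registered drivers are surfaced on the node's CSINode object; here is a minimal client-go sketch for inspecting them (assumes in-cluster credentials; the node name "crc" comes from the log):

    // csidrivers.go: list the CSI drivers currently registered on a node
    // by reading its CSINode object. Assumes in-cluster credentials; the
    // node name "crc" is taken from the log.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        csiNode, err := client.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, d := range csiNode.Spec.Drivers {
            // kubevirt.io.hostpath-provisioner should appear here once the
            // provisioner pod has re-registered with the kubelet.
            fmt.Println("registered driver:", d.Name)
        }
    }
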
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.783706 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.783741 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.783756 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:09:41 crc kubenswrapper[4870]: I0130 08:09:41.783784 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.783907 4870 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.783976 4870 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.783996 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:09:42.783975477 +0000 UTC m=+21.479522646 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.783922 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.784045 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.784056 4870 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.784058 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:09:42.784029559 +0000 UTC m=+21.479576738 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.784057 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.784099 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.784104 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:09:42.784090511 +0000 UTC m=+21.479637620 (durationBeforeRetry 1s). 
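
The MountVolume.SetUp failures here and just below (nginx-conf, networking-console-plugin-cert, kube-api-access-s2dwl, kube-api-access-cqllr) are all the same startup race: the kubelet has not yet registered the referenced ConfigMaps and Secrets (kube-root-ca.crt, openshift-service-ca.crt, and friends) in its object cache, so each mount fails with "not registered" and is requeued with a durationBeforeRetry. The log only shows the initial 1s delay; the usual scheme is exponential growth with a cap, sketched below with assumed parameters:

    // backoff_sketch.go: sketch of durationBeforeRetry growth for the
    // retried mounts above. Base, factor, and cap are assumed values for
    // illustration; the log only shows the initial 1s delay.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const (
            base     = 1 * time.Second
            factor   = 2
            maxDelay = 2 * time.Minute
        )
        d := base
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("attempt %d: durationBeforeRetry %s\n", attempt, d)
            d *= factor
            if d > maxDelay {
                d = maxDelay
            }
        }
    }
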
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.784112 4870 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:09:41 crc kubenswrapper[4870]: E0130 08:09:41.784161 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:09:42.784146363 +0000 UTC m=+21.479693472 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.038718 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:14:12.302701093 +0000 UTC Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.077863 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.078522 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.079756 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.080470 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.081634 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.082161 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.082724 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.083604 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.084364 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.085339 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.085801 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.086834 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.087335 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.087836 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.088730 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.089570 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.090186 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.090553 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.091153 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.091697 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.092259 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.094076 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.094539 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.095633 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.096198 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.096872 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.098025 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.098115 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/e
tcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":
\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.098668 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.099783 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.100531 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.101625 4870 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.101829 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.103844 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.104578 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.105659 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.107585 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.108380 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.109357 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.110091 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.111203 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.111510 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.111735 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.112930 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.113549 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.114622 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.115198 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.116178 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.116782 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.117886 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.118432 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.119286 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.119785 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.120750 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.121350 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.121856 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.123832 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.137761 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.152952 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.165231 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.174040 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.175050 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.177378 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.191295 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.198687 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.206616 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e"} Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.206667 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715"} Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.206682 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3b44f3a0c6ed8d542aae4cf4d599e6f89992d9651f4dda83cc8f819abc975355"} Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.207575 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7"} Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.207603 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3c30a7b979634db9749c244abf7dcb6c275e2ad3259168538d5b6c5a75831f69"} Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.208558 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d21ebb539502ece16466d8085148cb62d450eaedc66f40648913f75628df180e"} Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.209305 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.210354 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.212264 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c"} Jan 30 08:09:42 crc kubenswrapper[4870]: E0130 08:09:42.216367 4870 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.217628 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.226051 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.239757 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2
2b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.248777 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-che
ck-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.256708 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.264750 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.277061 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.289650 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.310709 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-m
etrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.325818 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.338316 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.350601 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.369889 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.385548 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.399053 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.412679 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.424914 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:42Z is after 2025-08-24T17:21:41Z"
Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.694715 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
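
Every "Failed to update status for pod" record in this log fails the same way: the node-identity webhook's serving certificate expired on 2025-08-24, long before the node's clock time of 2026-01-30, so each status patch is rejected during the TLS handshake before it ever reaches the API server. The "x509: certificate has expired or is not yet valid" text comes from the ordinary validity-window check that certificate verification performs; a minimal Go sketch of that comparison follows (only NotAfter and the clock time are taken from the log; the NotBefore value and the function itself are illustrative assumptions, not the real webhook certificate or verifier):

    package main

    import (
    	"crypto/x509"
    	"fmt"
    	"time"
    )

    // checkValidity mirrors the NotBefore/NotAfter window test that TLS
    // certificate verification applies before any chain building happens.
    func checkValidity(cert *x509.Certificate, now time.Time) error {
    	if now.Before(cert.NotBefore) {
    		return fmt.Errorf("x509: certificate is not yet valid: current time %s is before %s",
    			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
    	}
    	if now.After(cert.NotAfter) {
    		return fmt.Errorf("x509: certificate has expired: current time %s is after %s",
    			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
    	}
    	return nil
    }

    func main() {
    	// Stand-in values; NotAfter matches the log line, NotBefore is assumed.
    	cert := &x509.Certificate{
    		NotBefore: time.Date(2025, 5, 24, 17, 21, 41, 0, time.UTC),
    		NotAfter:  time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC),
    	}
    	now := time.Date(2026, 1, 30, 8, 9, 42, 0, time.UTC)
    	fmt.Println(checkValidity(cert, now)) // expired: 2026-01-30 is after 2025-08-24
    }

Until that certificate is rotated, every pod-status patch on the node keeps failing with the identical message, which is why the same error repeats below for every pod.
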
Jan 30 08:09:42 crc kubenswrapper[4870]: E0130 08:09:42.694928 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:09:44.694903783 +0000 UTC m=+23.390450902 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
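
The nestedpendingoperations record above blocks the failed unmount from retrying for 2 seconds; when the same operation fails again at 08:09:44 (further down), the penalty doubles to 4 seconds. That is per-operation exponential backoff keyed on the volume. A rough stand-alone sketch of the pattern, with the 2s seed taken from the log and the cap assumed (the kubelet's actual constants and types are not reproduced here):

    package main

    import (
    	"fmt"
    	"time"
    )

    // expBackoff tracks consecutive failures for one keyed operation and
    // doubles the wait after each failure, up to a fixed cap.
    type expBackoff struct {
    	initial, max time.Duration
    	failures     map[string]int
    }

    func (b *expBackoff) next(key string) time.Duration {
    	d := b.initial << b.failures[key] // 2s, 4s, 8s, ...
    	if d > b.max {
    		d = b.max
    	}
    	b.failures[key]++
    	return d
    }

    func main() {
    	b := &expBackoff{initial: 2 * time.Second, max: 2 * time.Minute, failures: map[string]int{}}
    	key := "kubevirt.io.hostpath-provisioner^pvc-657094db"
    	for i := 0; i < 4; i++ {
    		fmt.Printf("failure %d: no retries permitted for %s\n", i+1, b.next(key))
    	}
    }

The unmount itself fails because the hostpath-provisioner CSI driver has not re-registered with the kubelet yet after the restart, so the backoff simply buys time until the driver's socket shows up again.
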
Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.796142 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.796245 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.796287 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 08:09:42 crc kubenswrapper[4870]: I0130 08:09:42.796317 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 08:09:42 crc kubenswrapper[4870]: E0130 08:09:42.796420 4870 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 08:09:42 crc kubenswrapper[4870]: E0130 08:09:42.796476 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 08:09:42 crc kubenswrapper[4870]: E0130 08:09:42.796478 4870 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 08:09:42 crc kubenswrapper[4870]: E0130 08:09:42.796502 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 08:09:42 crc kubenswrapper[4870]: E0130 08:09:42.796526 4870 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 08:09:42 crc kubenswrapper[4870]: E0130 08:09:42.796509 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:09:44.796489611 +0000 UTC m=+23.492036730 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 08:09:42 crc kubenswrapper[4870]: E0130 08:09:42.796590 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:09:44.796569254 +0000 UTC m=+23.492116413 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 08:09:42 crc kubenswrapper[4870]: E0130 08:09:42.796623 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:09:44.796612525 +0000 UTC m=+23.492159714 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 08:09:42 crc kubenswrapper[4870]: E0130 08:09:42.796676 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 08:09:42 crc kubenswrapper[4870]: E0130 08:09:42.796705 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 08:09:42 crc kubenswrapper[4870]: E0130 08:09:42.796727 4870 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
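
The kube-api-access-* volumes in these records are projected volumes: each is assembled from the namespace's kube-root-ca.crt and openshift-service-ca.crt ConfigMaps plus a service-account token, and setup is refused until every source object has been delivered to the kubelet's object cache, which is why a single volume failure lists two missing objects. A toy sketch of that aggregation (the registry map and function are invented for illustration; only the message format mirrors the log):

    package main

    import (
    	"errors"
    	"fmt"
    	"strings"
    )

    // registered is a stand-in for the kubelet's view of which API objects
    // have been delivered for the pods it manages; it starts out empty
    // right after a restart, exactly the situation in the log.
    var registered = map[string]bool{}

    // sourcesReady collects one error per missing source and joins them
    // into a single bracketed message, mirroring the aggregated errors above.
    func sourcesReady(namespace string, sources []string) error {
    	var errs []string
    	for _, name := range sources {
    		if !registered[namespace+"/"+name] {
    			errs = append(errs, fmt.Sprintf("object %q/%q not registered", namespace, name))
    		}
    	}
    	if len(errs) > 0 {
    		return errors.New("[" + strings.Join(errs, ", ") + "]")
    	}
    	return nil
    }

    func main() {
    	err := sourcesReady("openshift-network-diagnostics",
    		[]string{"kube-root-ca.crt", "openshift-service-ca.crt"})
    	fmt.Println(err)
    }

Running this prints the same two-entry list seen in the kube-api-access-cqllr and kube-api-access-s2dwl failures; once the informers resync, the map fills in and the mounts succeed.
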
Jan 30 08:09:42 crc kubenswrapper[4870]: E0130 08:09:42.796800 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:09:44.79677678 +0000 UTC m=+23.492323969 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 08:09:43 crc kubenswrapper[4870]: I0130 08:09:43.039394 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 15:41:48.435506103 +0000 UTC
Jan 30 08:09:43 crc kubenswrapper[4870]: I0130 08:09:43.074086 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 08:09:43 crc kubenswrapper[4870]: I0130 08:09:43.074152 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 08:09:43 crc kubenswrapper[4870]: I0130 08:09:43.074196 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 08:09:43 crc kubenswrapper[4870]: E0130 08:09:43.074597 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 08:09:43 crc kubenswrapper[4870]: E0130 08:09:43.074424 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
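
The "Error syncing pod" records above all reduce to one condition: the container runtime reports NetworkReady=false because nothing has written a CNI configuration into /etc/kubernetes/cni/net.d/ yet (ovn-kubernetes has not come up at this point in the boot), so no sandbox can be created for any pod on the pod network. The underlying readiness test is essentially a scan of that directory for a network config; a simplified sketch follows (the accepted extensions follow the CNI convention and are an assumption, not the runtime's exact logic):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // cniConfigPresent reports whether dir holds at least one CNI network
    // configuration file, which is roughly the condition behind the
    // "NetworkReady=false ... no CNI configuration file" message.
    func cniConfigPresent(dir string) (bool, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return false, err
    	}
    	for _, e := range entries {
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json":
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
    	fmt.Println(ok, err)
    }

Pods on the host network (etcd, kube-apiserver, the network operator itself) are unaffected, which is why they are Running while these three pods keep cycling.
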
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:09:43 crc kubenswrapper[4870]: I0130 08:09:43.214618 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:09:44 crc kubenswrapper[4870]: I0130 08:09:44.040808 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 22:24:41.352140521 +0000 UTC Jan 30 08:09:44 crc kubenswrapper[4870]: I0130 08:09:44.219380 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae"} Jan 30 08:09:44 crc kubenswrapper[4870]: I0130 08:09:44.239328 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:44 crc kubenswrapper[4870]: I0130 08:09:44.254958 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:44 crc kubenswrapper[4870]: I0130 08:09:44.274363 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:44 crc kubenswrapper[4870]: I0130 08:09:44.297634 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:44 crc kubenswrapper[4870]: I0130 
08:09:44.326025 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\
\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:44 crc kubenswrapper[4870]: I0130 08:09:44.345769 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:44 crc kubenswrapper[4870]: I0130 08:09:44.362980 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:44 crc kubenswrapper[4870]: I0130 08:09:44.386850 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:44 crc kubenswrapper[4870]: I0130 08:09:44.403127 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:44 crc kubenswrapper[4870]: I0130 08:09:44.711265 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:09:44 crc kubenswrapper[4870]: E0130 08:09:44.711539 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:09:48.711501744 +0000 UTC m=+27.407048903 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:09:44 crc kubenswrapper[4870]: I0130 08:09:44.812839 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:09:44 crc kubenswrapper[4870]: I0130 08:09:44.812944 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:09:44 crc kubenswrapper[4870]: I0130 08:09:44.812995 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:09:44 crc kubenswrapper[4870]: I0130 08:09:44.813057 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:09:44 crc kubenswrapper[4870]: E0130 08:09:44.813165 4870 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:09:44 crc kubenswrapper[4870]: E0130 08:09:44.813250 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:09:48.813229197 +0000 UTC m=+27.508776306 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:09:44 crc kubenswrapper[4870]: E0130 08:09:44.813271 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:09:44 crc kubenswrapper[4870]: E0130 08:09:44.813343 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:09:44 crc kubenswrapper[4870]: E0130 08:09:44.813344 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:09:44 crc kubenswrapper[4870]: E0130 08:09:44.813373 4870 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:09:44 crc kubenswrapper[4870]: E0130 08:09:44.813290 4870 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:09:44 crc kubenswrapper[4870]: E0130 08:09:44.813442 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:09:48.813429273 +0000 UTC m=+27.508976382 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:09:44 crc kubenswrapper[4870]: E0130 08:09:44.813483 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:09:48.813449894 +0000 UTC m=+27.508997083 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:09:44 crc kubenswrapper[4870]: E0130 08:09:44.813499 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:09:44 crc kubenswrapper[4870]: E0130 08:09:44.813536 4870 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:09:44 crc kubenswrapper[4870]: E0130 08:09:44.813642 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:09:48.813606698 +0000 UTC m=+27.509153847 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:09:45 crc kubenswrapper[4870]: I0130 08:09:45.041294 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 07:06:09.912955788 +0000 UTC Jan 30 08:09:45 crc kubenswrapper[4870]: I0130 08:09:45.074571 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:09:45 crc kubenswrapper[4870]: I0130 08:09:45.074630 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:09:45 crc kubenswrapper[4870]: E0130 08:09:45.074680 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:09:45 crc kubenswrapper[4870]: I0130 08:09:45.074649 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:09:45 crc kubenswrapper[4870]: E0130 08:09:45.074809 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:09:45 crc kubenswrapper[4870]: E0130 08:09:45.074995 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:09:46 crc kubenswrapper[4870]: I0130 08:09:46.041429 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 00:06:46.117266408 +0000 UTC Jan 30 08:09:46 crc kubenswrapper[4870]: I0130 08:09:46.949926 4870 csr.go:261] certificate signing request csr-bt6jw is approved, waiting to be issued Jan 30 08:09:46 crc kubenswrapper[4870]: I0130 08:09:46.960329 4870 csr.go:257] certificate signing request csr-bt6jw is issued Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.042204 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 12:52:38.454768013 +0000 UTC Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.073779 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.073819 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:09:47 crc kubenswrapper[4870]: E0130 08:09:47.073921 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.073925 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:09:47 crc kubenswrapper[4870]: E0130 08:09:47.074030 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:09:47 crc kubenswrapper[4870]: E0130 08:09:47.074140 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.256482 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.259183 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.259243 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.259257 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.259344 4870 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.271943 4870 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.272076 4870 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.273605 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.273659 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.273674 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.273693 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.273707 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:47Z","lastTransitionTime":"2026-01-30T08:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.340360 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-8kvt7"] Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.340846 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-8kvt7" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.345626 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.345832 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.347034 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 08:09:47 crc kubenswrapper[4870]: E0130 08:09:47.356967 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"42bb4058-de5f-47d3-b90e-bda57dd064e9\\\",\\\"systemUUID\\\":\\\"7dbac932-0e54-4045-a1f0-fa334c8e1b7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.363248 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.363295 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.363307 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.363327 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.363339 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:47Z","lastTransitionTime":"2026-01-30T08:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.392371 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: E0130 08:09:47.412541 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"42bb4058-de5f-47d3-b90e-bda57dd064e9\\\",\\\"systemUUID\\\":\\\"7dbac932-0e54-4045-a1f0-fa334c8e1b7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.417813 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.417858 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.417867 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.418125 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.418142 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:47Z","lastTransitionTime":"2026-01-30T08:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.433155 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: E0130 08:09:47.438696 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"42bb4058-de5f-47d3-b90e-bda57dd064e9\\\",\\\"systemUUID\\\":\\\"7dbac932-0e54-4045-a1f0-fa334c8e1b7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.438934 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzdvc\" (UniqueName: \"kubernetes.io/projected/1239efc2-d4e8-4a88-a0bf-00a685812999-kube-api-access-bzdvc\") pod \"node-resolver-8kvt7\" (UID: 
\"1239efc2-d4e8-4a88-a0bf-00a685812999\") " pod="openshift-dns/node-resolver-8kvt7" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.439021 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1239efc2-d4e8-4a88-a0bf-00a685812999-hosts-file\") pod \"node-resolver-8kvt7\" (UID: \"1239efc2-d4e8-4a88-a0bf-00a685812999\") " pod="openshift-dns/node-resolver-8kvt7" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.445841 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.445904 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.445917 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.445935 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.445949 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:47Z","lastTransitionTime":"2026-01-30T08:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.448402 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.459052 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.459089 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.459096 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.459110 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.459120 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:47Z","lastTransitionTime":"2026-01-30T08:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.461992 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: E0130 08:09:47.471853 4870 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.473816 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.473856 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.473867 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.473905 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.473920 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:47Z","lastTransitionTime":"2026-01-30T08:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.476513 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.495491 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621b
a1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db6
8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.507262 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.520563 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.536040 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.540367 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzdvc\" (UniqueName: \"kubernetes.io/projected/1239efc2-d4e8-4a88-a0bf-00a685812999-kube-api-access-bzdvc\") pod \"node-resolver-8kvt7\" (UID: \"1239efc2-d4e8-4a88-a0bf-00a685812999\") " pod="openshift-dns/node-resolver-8kvt7" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.540411 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1239efc2-d4e8-4a88-a0bf-00a685812999-hosts-file\") pod \"node-resolver-8kvt7\" (UID: \"1239efc2-d4e8-4a88-a0bf-00a685812999\") " pod="openshift-dns/node-resolver-8kvt7" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.540515 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/1239efc2-d4e8-4a88-a0bf-00a685812999-hosts-file\") pod \"node-resolver-8kvt7\" (UID: \"1239efc2-d4e8-4a88-a0bf-00a685812999\") " pod="openshift-dns/node-resolver-8kvt7" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.554764 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.557902 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzdvc\" (UniqueName: \"kubernetes.io/projected/1239efc2-d4e8-4a88-a0bf-00a685812999-kube-api-access-bzdvc\") pod \"node-resolver-8kvt7\" (UID: \"1239efc2-d4e8-4a88-a0bf-00a685812999\") " pod="openshift-dns/node-resolver-8kvt7" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.576259 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.576315 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.576327 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.576351 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.576362 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:47Z","lastTransitionTime":"2026-01-30T08:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.654640 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-8kvt7" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.679802 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.680062 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.680149 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.680227 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.680307 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:47Z","lastTransitionTime":"2026-01-30T08:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.775184 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-j4sd8"] Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.775464 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-rrkfz"] Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.775929 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-hsmrb"] Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.776171 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hsmrb" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.776512 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.776713 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.781663 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.782015 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.782032 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.782221 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.782363 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.782700 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.782918 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.786402 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.786422 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.786510 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.786581 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.786605 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.793946 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.793983 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.793994 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.794015 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.794027 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:47Z","lastTransitionTime":"2026-01-30T08:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.801766 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.813803 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.828411 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.849659 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.881962 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.895995 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.896024 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.896032 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.896044 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.896052 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:47Z","lastTransitionTime":"2026-01-30T08:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.897619 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.907855 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.917909 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.926125 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is 
after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.938029 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.945237 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-etc-kubernetes\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.945385 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-hostroot\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.945504 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8bdd7f5e-1187-4760-b2dc-98c3d3286f05-cnibin\") pod \"multus-additional-cni-plugins-rrkfz\" (UID: \"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\") " pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.945618 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d-proxy-tls\") pod \"machine-config-daemon-j4sd8\" (UID: \"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\") " pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.945731 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8bdd7f5e-1187-4760-b2dc-98c3d3286f05-os-release\") pod \"multus-additional-cni-plugins-rrkfz\" (UID: \"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\") " pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.945861 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-host-run-netns\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.946003 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-host-var-lib-cni-multus\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.946108 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8bdd7f5e-1187-4760-b2dc-98c3d3286f05-system-cni-dir\") pod \"multus-additional-cni-plugins-rrkfz\" (UID: 
\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\") " pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.946208 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-cnibin\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.946317 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp4df\" (UniqueName: \"kubernetes.io/projected/5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d-kube-api-access-hp4df\") pod \"machine-config-daemon-j4sd8\" (UID: \"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\") " pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.946354 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwsvv\" (UniqueName: \"kubernetes.io/projected/8bdd7f5e-1187-4760-b2dc-98c3d3286f05-kube-api-access-jwsvv\") pod \"multus-additional-cni-plugins-rrkfz\" (UID: \"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\") " pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.946375 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3e8e9e25-2b9b-4820-8282-48e1d930a721-cni-binary-copy\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.946395 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3e8e9e25-2b9b-4820-8282-48e1d930a721-multus-daemon-config\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.946425 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8bdd7f5e-1187-4760-b2dc-98c3d3286f05-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rrkfz\" (UID: \"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\") " pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.946445 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-system-cni-dir\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.946463 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-multus-cni-dir\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.946484 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/8bdd7f5e-1187-4760-b2dc-98c3d3286f05-cni-binary-copy\") pod \"multus-additional-cni-plugins-rrkfz\" (UID: \"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\") " pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.946505 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8bdd7f5e-1187-4760-b2dc-98c3d3286f05-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rrkfz\" (UID: \"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\") " pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.946526 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-multus-socket-dir-parent\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.946545 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-host-run-multus-certs\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.946565 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-os-release\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.946584 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-host-var-lib-cni-bin\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.946606 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-host-var-lib-kubelet\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.946628 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d-rootfs\") pod \"machine-config-daemon-j4sd8\" (UID: \"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\") " pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.946664 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-host-run-k8s-cni-cncf-io\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.946688 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d-mcd-auth-proxy-config\") pod \"machine-config-daemon-j4sd8\" (UID: \"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\") " pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.946709 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-multus-conf-dir\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.946731 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7mcd\" (UniqueName: \"kubernetes.io/projected/3e8e9e25-2b9b-4820-8282-48e1d930a721-kube-api-access-k7mcd\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.950562 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.961547 4870 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-30 08:04:46 +0000 UTC, rotation deadline is 2026-11-30 12:25:13.515157138 +0000 UTC Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.961749 4870 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7300h15m25.553411431s for next certificate rotation Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.971777 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1
003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.993762 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:47Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.998056 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.998109 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.998125 4870 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.998148 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:47 crc kubenswrapper[4870]: I0130 08:09:47.998164 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:47Z","lastTransitionTime":"2026-01-30T08:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.015302 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
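
The NodeNotReady condition recorded above is the kubelet's network-plugin readiness gate: the container runtime reports NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/, and at this point in the log the multus pods that would write one are still mounting their volumes. A minimal sketch of that kind of directory probe, assuming a hypothetical helper hasCNIConfig rather than the kubelet's actual implementation:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cniConfDir is the path the kubelet complains about in the log above.
const cniConfDir = "/etc/kubernetes/cni/net.d"

// hasCNIConfig reports whether any CNI configuration file exists yet.
// Illustrative only: the real kubelet/CRI-O logic also parses and
// validates the file contents, not just the extension.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig(cniConfDir)
	if err != nil || !ok {
		// Mirrors the "NetworkPluginNotReady" condition recorded above.
		fmt.Println("container runtime network not ready: no CNI configuration file")
		return
	}
	fmt.Println("NetworkReady=true")
}

Once a multus pod writes its config into that directory, the same probe flips to ready and the Ready condition transitions back to True.
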
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.031452 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.042935 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 13:02:21.699214919 +0000 UTC Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047308 4870 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp4df\" (UniqueName: \"kubernetes.io/projected/5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d-kube-api-access-hp4df\") pod \"machine-config-daemon-j4sd8\" (UID: \"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\") " pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047338 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwsvv\" (UniqueName: \"kubernetes.io/projected/8bdd7f5e-1187-4760-b2dc-98c3d3286f05-kube-api-access-jwsvv\") pod \"multus-additional-cni-plugins-rrkfz\" (UID: \"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\") " pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047356 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3e8e9e25-2b9b-4820-8282-48e1d930a721-cni-binary-copy\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047374 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3e8e9e25-2b9b-4820-8282-48e1d930a721-multus-daemon-config\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047390 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8bdd7f5e-1187-4760-b2dc-98c3d3286f05-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rrkfz\" (UID: \"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\") " pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047405 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-system-cni-dir\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047419 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-multus-cni-dir\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047436 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8bdd7f5e-1187-4760-b2dc-98c3d3286f05-cni-binary-copy\") pod \"multus-additional-cni-plugins-rrkfz\" (UID: \"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\") " pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047453 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8bdd7f5e-1187-4760-b2dc-98c3d3286f05-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rrkfz\" (UID: \"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\") " pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 
08:09:48.047469 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-multus-socket-dir-parent\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047485 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-host-run-multus-certs\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047522 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-os-release\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047537 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-host-var-lib-cni-bin\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047551 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-host-var-lib-kubelet\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047570 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d-rootfs\") pod \"machine-config-daemon-j4sd8\" (UID: \"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\") " pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047594 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-host-run-k8s-cni-cncf-io\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047610 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d-mcd-auth-proxy-config\") pod \"machine-config-daemon-j4sd8\" (UID: \"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\") " pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047624 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-multus-conf-dir\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047640 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-k7mcd\" (UniqueName: \"kubernetes.io/projected/3e8e9e25-2b9b-4820-8282-48e1d930a721-kube-api-access-k7mcd\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047655 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-etc-kubernetes\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047675 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-hostroot\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047688 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8bdd7f5e-1187-4760-b2dc-98c3d3286f05-cnibin\") pod \"multus-additional-cni-plugins-rrkfz\" (UID: \"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\") " pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047721 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d-proxy-tls\") pod \"machine-config-daemon-j4sd8\" (UID: \"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\") " pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047737 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8bdd7f5e-1187-4760-b2dc-98c3d3286f05-os-release\") pod \"multus-additional-cni-plugins-rrkfz\" (UID: \"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\") " pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047761 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-host-run-netns\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047774 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-host-var-lib-cni-multus\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047772 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-os-release\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047791 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8bdd7f5e-1187-4760-b2dc-98c3d3286f05-system-cni-dir\") pod \"multus-additional-cni-plugins-rrkfz\" (UID: 
\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\") " pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047844 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-host-var-lib-cni-bin\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047859 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-cnibin\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047892 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-host-var-lib-kubelet\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047938 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-multus-conf-dir\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.048107 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3e8e9e25-2b9b-4820-8282-48e1d930a721-cni-binary-copy\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047297 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.047825 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8bdd7f5e-1187-4760-b2dc-98c3d3286f05-system-cni-dir\") pod \"multus-additional-cni-plugins-rrkfz\" (UID: \"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\") " pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.048340 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-host-run-netns\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.048363 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8bdd7f5e-1187-4760-b2dc-98c3d3286f05-os-release\") pod \"multus-additional-cni-plugins-rrkfz\" (UID: \"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\") " pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.048570 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-hostroot\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.048586 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-etc-kubernetes\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.048601 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d-rootfs\") pod \"machine-config-daemon-j4sd8\" (UID: \"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\") " pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.048616 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8bdd7f5e-1187-4760-b2dc-98c3d3286f05-cnibin\") pod \"multus-additional-cni-plugins-rrkfz\" (UID: \"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\") " 
pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.048635 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-cnibin\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.048652 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-host-var-lib-cni-multus\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.048666 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-host-run-k8s-cni-cncf-io\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.048701 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-multus-cni-dir\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.048805 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d-mcd-auth-proxy-config\") pod \"machine-config-daemon-j4sd8\" (UID: \"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\") " pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.048958 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-system-cni-dir\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.049009 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-multus-socket-dir-parent\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.049100 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3e8e9e25-2b9b-4820-8282-48e1d930a721-multus-daemon-config\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.049103 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3e8e9e25-2b9b-4820-8282-48e1d930a721-host-run-multus-certs\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.049175 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/8bdd7f5e-1187-4760-b2dc-98c3d3286f05-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rrkfz\" (UID: \"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\") " pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.049204 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8bdd7f5e-1187-4760-b2dc-98c3d3286f05-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rrkfz\" (UID: \"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\") " pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.049638 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8bdd7f5e-1187-4760-b2dc-98c3d3286f05-cni-binary-copy\") pod \"multus-additional-cni-plugins-rrkfz\" (UID: \"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\") " pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.052498 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d-proxy-tls\") pod \"machine-config-daemon-j4sd8\" (UID: \"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\") " pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.061336 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.063053 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwsvv\" (UniqueName: \"kubernetes.io/projected/8bdd7f5e-1187-4760-b2dc-98c3d3286f05-kube-api-access-jwsvv\") pod \"multus-additional-cni-plugins-rrkfz\" (UID: \"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\") " pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.069422 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7mcd\" (UniqueName: \"kubernetes.io/projected/3e8e9e25-2b9b-4820-8282-48e1d930a721-kube-api-access-k7mcd\") pod \"multus-hsmrb\" (UID: \"3e8e9e25-2b9b-4820-8282-48e1d930a721\") " pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.073057 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp4df\" (UniqueName: \"kubernetes.io/projected/5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d-kube-api-access-hp4df\") pod \"machine-config-daemon-j4sd8\" (UID: \"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\") " pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.073207 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.086971 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.095038 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hsmrb" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.100630 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.100665 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.100676 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.100692 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.100704 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:48Z","lastTransitionTime":"2026-01-30T08:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:48 crc kubenswrapper[4870]: W0130 08:09:48.105834 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e8e9e25_2b9b_4820_8282_48e1d930a721.slice/crio-c933778356f15b59a6baac7e8f424762da44bd2e5b2a9ec9b779401b6738da10 WatchSource:0}: Error finding container c933778356f15b59a6baac7e8f424762da44bd2e5b2a9ec9b779401b6738da10: Status 404 returned error can't find the container with id c933778356f15b59a6baac7e8f424762da44bd2e5b2a9ec9b779401b6738da10 Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.109149 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1
003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.120770 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.141652 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.152959 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cj5db"] Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.153854 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.156237 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.156681 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.156845 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.156993 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.156992 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.157097 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.157181 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.157837 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.169610 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.172214 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.180137 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.182953 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: W0130 08:09:48.183727 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d3c8db6_cf22_4fb2_ae7c_a3d544473a6d.slice/crio-fa286fd8c7032ffeaac6381a2f5eba18216372d858ee8acb5cf5ec3f949c8689 WatchSource:0}: Error finding container fa286fd8c7032ffeaac6381a2f5eba18216372d858ee8acb5cf5ec3f949c8689: Status 404 returned error can't find the container with id fa286fd8c7032ffeaac6381a2f5eba18216372d858ee8acb5cf5ec3f949c8689 Jan 30 08:09:48 crc kubenswrapper[4870]: W0130 08:09:48.191240 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bdd7f5e_1187_4760_b2dc_98c3d3286f05.slice/crio-d67af729ffbb403137a43034324451b234a181eb60b2891191e6ddfad4b7cdb9 
WatchSource:0}: Error finding container d67af729ffbb403137a43034324451b234a181eb60b2891191e6ddfad4b7cdb9: Status 404 returned error can't find the container with id d67af729ffbb403137a43034324451b234a181eb60b2891191e6ddfad4b7cdb9 Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.196377 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc 
kubenswrapper[4870]: I0130 08:09:48.202787 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.202808 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.202815 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.202828 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.202837 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:48Z","lastTransitionTime":"2026-01-30T08:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.219173 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.233835 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" event={"ID":"8bdd7f5e-1187-4760-b2dc-98c3d3286f05","Type":"ContainerStarted","Data":"d67af729ffbb403137a43034324451b234a181eb60b2891191e6ddfad4b7cdb9"} Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.234750 4870 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerStarted","Data":"fa286fd8c7032ffeaac6381a2f5eba18216372d858ee8acb5cf5ec3f949c8689"} Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.236599 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hsmrb" event={"ID":"3e8e9e25-2b9b-4820-8282-48e1d930a721","Type":"ContainerStarted","Data":"f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a"} Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.236748 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hsmrb" event={"ID":"3e8e9e25-2b9b-4820-8282-48e1d930a721","Type":"ContainerStarted","Data":"c933778356f15b59a6baac7e8f424762da44bd2e5b2a9ec9b779401b6738da10"} Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.236730 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.237832 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8kvt7" event={"ID":"1239efc2-d4e8-4a88-a0bf-00a685812999","Type":"ContainerStarted","Data":"82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0"} Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.237866 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/node-resolver-8kvt7" event={"ID":"1239efc2-d4e8-4a88-a0bf-00a685812999","Type":"ContainerStarted","Data":"06e590ba7580dcd49d52849d81c5649ecddac56b8bf02da40b3d5d3017c53a04"} Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.250844 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-run-ovn-kubernetes\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.251046 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-cni-netd\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.251205 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-var-lib-openvswitch\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.251344 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-run-ovn\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.251461 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-node-log\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.251572 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-slash\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.251656 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/36037609-52f9-4c09-8beb-6d35a039347b-ovnkube-script-lib\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.251727 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-log-socket\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.251801 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-cni-bin\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.251898 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36037609-52f9-4c09-8beb-6d35a039347b-ovnkube-config\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.252101 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36037609-52f9-4c09-8beb-6d35a039347b-env-overrides\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.252181 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk5ps\" (UniqueName: \"kubernetes.io/projected/36037609-52f9-4c09-8beb-6d35a039347b-kube-api-access-pk5ps\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.252279 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-systemd-units\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.252352 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-kubelet\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.252430 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36037609-52f9-4c09-8beb-6d35a039347b-ovn-node-metrics-cert\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.252521 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-etc-openvswitch\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.252598 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-run-openvswitch\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 
08:09:48.252690 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.252794 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-run-netns\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.252904 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-run-systemd\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.253194 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.270608 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.287188 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.299041 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.305076 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.305120 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.305132 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.305149 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.305161 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:48Z","lastTransitionTime":"2026-01-30T08:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.311054 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.323420 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.349511 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31
a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354226 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-node-log\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc 
kubenswrapper[4870]: I0130 08:09:48.354278 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-slash\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354302 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/36037609-52f9-4c09-8beb-6d35a039347b-ovnkube-script-lib\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354325 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-log-socket\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354346 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-cni-bin\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354365 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36037609-52f9-4c09-8beb-6d35a039347b-ovnkube-config\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354404 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-systemd-units\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354427 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36037609-52f9-4c09-8beb-6d35a039347b-env-overrides\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354450 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk5ps\" (UniqueName: \"kubernetes.io/projected/36037609-52f9-4c09-8beb-6d35a039347b-kube-api-access-pk5ps\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354475 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-kubelet\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354498 4870 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-etc-openvswitch\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354511 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-log-socket\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354521 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-run-openvswitch\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354541 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36037609-52f9-4c09-8beb-6d35a039347b-ovn-node-metrics-cert\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354561 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-node-log\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354587 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-run-netns\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354595 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-slash\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354611 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-run-systemd\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354632 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354693 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-run-ovn-kubernetes\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354728 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-cni-netd\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354751 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-var-lib-openvswitch\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354781 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-run-ovn\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.354835 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-cni-bin\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.355366 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/36037609-52f9-4c09-8beb-6d35a039347b-ovnkube-script-lib\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.355397 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-systemd-units\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.355366 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36037609-52f9-4c09-8beb-6d35a039347b-ovnkube-config\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.355767 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36037609-52f9-4c09-8beb-6d35a039347b-env-overrides\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.355901 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-run-systemd\") pod \"ovnkube-node-cj5db\" (UID: 
\"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.355949 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-kubelet\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.355967 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-cni-netd\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.355983 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-etc-openvswitch\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.356017 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-run-openvswitch\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.356044 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-var-lib-openvswitch\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.356055 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.356373 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-run-ovn\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.356984 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-run-netns\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.357027 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-run-ovn-kubernetes\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 
08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.361330 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36037609-52f9-4c09-8beb-6d35a039347b-ovn-node-metrics-cert\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.368401 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.377852 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk5ps\" (UniqueName: \"kubernetes.io/projected/36037609-52f9-4c09-8beb-6d35a039347b-kube-api-access-pk5ps\") pod \"ovnkube-node-cj5db\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.379475 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.389646 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.400868 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.408242 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.408290 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.408300 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.408313 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.408321 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:48Z","lastTransitionTime":"2026-01-30T08:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.415094 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.430300 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.443475 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.456094 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.464156 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.468943 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: W0130 08:09:48.475430 4870 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36037609_52f9_4c09_8beb_6d35a039347b.slice/crio-cb12165531731a176212e5ceb871fbc54aec2538e3ad27d93d5c0438cf177aa7 WatchSource:0}: Error finding container cb12165531731a176212e5ceb871fbc54aec2538e3ad27d93d5c0438cf177aa7: Status 404 returned error can't find the container with id cb12165531731a176212e5ceb871fbc54aec2538e3ad27d93d5c0438cf177aa7 Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.483181 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.496305 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.507453 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.510106 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.510146 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.510156 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.510172 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.510182 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:48Z","lastTransitionTime":"2026-01-30T08:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.518546 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.538437 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1
003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.550210 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.567312 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.580113 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.595553 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.606689 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:48Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.612449 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.612472 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.612481 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.612495 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.612504 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:48Z","lastTransitionTime":"2026-01-30T08:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.714953 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.714997 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.715010 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.715025 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.715036 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:48Z","lastTransitionTime":"2026-01-30T08:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.758730 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:09:48 crc kubenswrapper[4870]: E0130 08:09:48.758837 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:09:56.758814464 +0000 UTC m=+35.454361573 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.817847 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.817962 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.817982 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.818005 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.818017 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:48Z","lastTransitionTime":"2026-01-30T08:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.859738 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.859807 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.859831 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.859852 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:09:48 crc kubenswrapper[4870]: E0130 08:09:48.859946 4870 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:09:48 crc kubenswrapper[4870]: E0130 08:09:48.859978 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:09:48 crc kubenswrapper[4870]: E0130 08:09:48.860003 4870 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:09:48 crc kubenswrapper[4870]: E0130 08:09:48.860016 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:09:48 crc kubenswrapper[4870]: E0130 08:09:48.860085 4870 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:09:48 crc kubenswrapper[4870]: E0130 08:09:48.860019 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:09:56.860002639 +0000 UTC m=+35.555549748 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:09:48 crc kubenswrapper[4870]: E0130 08:09:48.859984 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:09:48 crc kubenswrapper[4870]: E0130 08:09:48.860121 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:09:56.860101372 +0000 UTC m=+35.555648551 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:09:48 crc kubenswrapper[4870]: E0130 08:09:48.860136 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:09:48 crc kubenswrapper[4870]: E0130 08:09:48.860140 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:09:56.860130653 +0000 UTC m=+35.555677762 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:09:48 crc kubenswrapper[4870]: E0130 08:09:48.860156 4870 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:09:48 crc kubenswrapper[4870]: E0130 08:09:48.860221 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:09:56.860204856 +0000 UTC m=+35.555752065 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.920649 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.920706 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.920718 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.920738 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:48 crc kubenswrapper[4870]: I0130 08:09:48.920751 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:48Z","lastTransitionTime":"2026-01-30T08:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.027684 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.027738 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.027752 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.027774 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.027786 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:49Z","lastTransitionTime":"2026-01-30T08:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.043849 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 17:16:16.586843725 +0000 UTC Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.074166 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.074232 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.074177 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:09:49 crc kubenswrapper[4870]: E0130 08:09:49.074357 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:09:49 crc kubenswrapper[4870]: E0130 08:09:49.074521 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:09:49 crc kubenswrapper[4870]: E0130 08:09:49.074726 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.130426 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.130462 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.130471 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.130487 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.130496 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:49Z","lastTransitionTime":"2026-01-30T08:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.233269 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.233339 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.233352 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.233376 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.233391 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:49Z","lastTransitionTime":"2026-01-30T08:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.243593 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerStarted","Data":"3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85"} Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.243667 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerStarted","Data":"94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7"} Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.249563 4870 generic.go:334] "Generic (PLEG): container finished" podID="36037609-52f9-4c09-8beb-6d35a039347b" containerID="daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f" exitCode=0 Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.249647 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerDied","Data":"daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f"} Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.249719 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerStarted","Data":"cb12165531731a176212e5ceb871fbc54aec2538e3ad27d93d5c0438cf177aa7"} Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.251828 4870 generic.go:334] "Generic (PLEG): container finished" podID="8bdd7f5e-1187-4760-b2dc-98c3d3286f05" containerID="0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7" exitCode=0 Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.253149 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" event={"ID":"8bdd7f5e-1187-4760-b2dc-98c3d3286f05","Type":"ContainerDied","Data":"0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7"} Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.264675 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.294706 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.319038 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.334794 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.336147 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.336197 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.336208 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.336224 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.336236 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:49Z","lastTransitionTime":"2026-01-30T08:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.347354 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.358195 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.372155 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.387998 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.409636 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.434768 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.439598 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.439662 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.439675 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.439694 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.439704 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:49Z","lastTransitionTime":"2026-01-30T08:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.451315 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.465923 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.478319 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.492328 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.505305 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.516023 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.537236 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1
003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.541767 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.541811 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.541829 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.541846 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.541855 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:49Z","lastTransitionTime":"2026-01-30T08:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.550170 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.563013 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.573313 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.589898 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.617843 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z 
is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.631041 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.644773 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.645128 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.645198 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.645274 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.645341 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:49Z","lastTransitionTime":"2026-01-30T08:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.646376 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.660657 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.671245 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.683141 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.693074 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.747744 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.747956 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.748041 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.748113 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.748179 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:49Z","lastTransitionTime":"2026-01-30T08:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.852438 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.852482 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.852494 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.852517 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.852529 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:49Z","lastTransitionTime":"2026-01-30T08:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.958182 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.958229 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.958240 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.958294 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:49 crc kubenswrapper[4870]: I0130 08:09:49.958306 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:49Z","lastTransitionTime":"2026-01-30T08:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.044924 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 15:07:29.764998287 +0000 UTC Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.061263 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.061302 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.061314 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.061330 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.061343 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:50Z","lastTransitionTime":"2026-01-30T08:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.163963 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.164000 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.164009 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.164025 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.164034 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:50Z","lastTransitionTime":"2026-01-30T08:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.257262 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerStarted","Data":"a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c"} Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.257304 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerStarted","Data":"1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499"} Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.257325 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerStarted","Data":"c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad"} Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.257336 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerStarted","Data":"0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71"} Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.257348 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerStarted","Data":"575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7"} Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.260116 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" event={"ID":"8bdd7f5e-1187-4760-b2dc-98c3d3286f05","Type":"ContainerStarted","Data":"d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f"} Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.266379 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.266518 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.266589 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.266667 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.266740 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:50Z","lastTransitionTime":"2026-01-30T08:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.285049 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.299935 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.313265 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.326416 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.340919 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.357510 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.369589 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.369645 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.369656 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.369673 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.369694 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:50Z","lastTransitionTime":"2026-01-30T08:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.374637 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.396408 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-dpj7j"] Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.396769 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-dpj7j" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.398713 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.399125 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.399137 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.399148 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.407370 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z 
is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.421086 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.435401 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.448497 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.463426 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.472018 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.472047 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.472058 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.472073 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.472092 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:50Z","lastTransitionTime":"2026-01-30T08:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.481148 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.494663 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.512110 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z 
is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.527398 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.540970 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.555704 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.567202 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.574609 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.574647 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.574655 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.574670 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.574680 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:50Z","lastTransitionTime":"2026-01-30T08:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.575583 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/228f8bf9-7e75-4886-8441-57bc0d251413-host\") pod \"node-ca-dpj7j\" (UID: \"228f8bf9-7e75-4886-8441-57bc0d251413\") " pod="openshift-image-registry/node-ca-dpj7j" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.575826 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/228f8bf9-7e75-4886-8441-57bc0d251413-serviceca\") pod \"node-ca-dpj7j\" (UID: \"228f8bf9-7e75-4886-8441-57bc0d251413\") " pod="openshift-image-registry/node-ca-dpj7j" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.576051 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9dtx\" (UniqueName: \"kubernetes.io/projected/228f8bf9-7e75-4886-8441-57bc0d251413-kube-api-access-x9dtx\") pod \"node-ca-dpj7j\" (UID: \"228f8bf9-7e75-4886-8441-57bc0d251413\") " pod="openshift-image-registry/node-ca-dpj7j" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.579501 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.594728 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.606990 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.624589 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1
003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.638649 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.654328 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.676910 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/228f8bf9-7e75-4886-8441-57bc0d251413-host\") pod \"node-ca-dpj7j\" (UID: \"228f8bf9-7e75-4886-8441-57bc0d251413\") " pod="openshift-image-registry/node-ca-dpj7j" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.676960 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/228f8bf9-7e75-4886-8441-57bc0d251413-serviceca\") pod \"node-ca-dpj7j\" (UID: \"228f8bf9-7e75-4886-8441-57bc0d251413\") " pod="openshift-image-registry/node-ca-dpj7j" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.677004 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9dtx\" (UniqueName: \"kubernetes.io/projected/228f8bf9-7e75-4886-8441-57bc0d251413-kube-api-access-x9dtx\") pod \"node-ca-dpj7j\" (UID: \"228f8bf9-7e75-4886-8441-57bc0d251413\") " pod="openshift-image-registry/node-ca-dpj7j" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.677004 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/228f8bf9-7e75-4886-8441-57bc0d251413-host\") pod \"node-ca-dpj7j\" (UID: 
\"228f8bf9-7e75-4886-8441-57bc0d251413\") " pod="openshift-image-registry/node-ca-dpj7j" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.677050 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.677081 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.677098 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.677118 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.677136 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:50Z","lastTransitionTime":"2026-01-30T08:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.678736 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/228f8bf9-7e75-4886-8441-57bc0d251413-serviceca\") pod \"node-ca-dpj7j\" (UID: \"228f8bf9-7e75-4886-8441-57bc0d251413\") " pod="openshift-image-registry/node-ca-dpj7j" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.691507 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.719988 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9dtx\" (UniqueName: \"kubernetes.io/projected/228f8bf9-7e75-4886-8441-57bc0d251413-kube-api-access-x9dtx\") pod \"node-ca-dpj7j\" (UID: \"228f8bf9-7e75-4886-8441-57bc0d251413\") " pod="openshift-image-registry/node-ca-dpj7j" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.754760 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.780713 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.780749 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.780761 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.780776 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.780788 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:50Z","lastTransitionTime":"2026-01-30T08:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.790401 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.828822 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:50Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.882817 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.882865 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.882921 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.882938 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.882951 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:50Z","lastTransitionTime":"2026-01-30T08:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.986811 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.986868 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.986913 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.986940 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:50 crc kubenswrapper[4870]: I0130 08:09:50.986958 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:50Z","lastTransitionTime":"2026-01-30T08:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.010033 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-dpj7j" Jan 30 08:09:51 crc kubenswrapper[4870]: W0130 08:09:51.029540 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod228f8bf9_7e75_4886_8441_57bc0d251413.slice/crio-59a39a83ea2a993841daf6a190fb3bb2d1d7c07fc4b1de61267f753f3336c0ad WatchSource:0}: Error finding container 59a39a83ea2a993841daf6a190fb3bb2d1d7c07fc4b1de61267f753f3336c0ad: Status 404 returned error can't find the container with id 59a39a83ea2a993841daf6a190fb3bb2d1d7c07fc4b1de61267f753f3336c0ad Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.045377 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 04:35:27.32303976 +0000 UTC Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.074024 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.074108 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:09:51 crc kubenswrapper[4870]: E0130 08:09:51.074155 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.074185 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:09:51 crc kubenswrapper[4870]: E0130 08:09:51.074313 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:09:51 crc kubenswrapper[4870]: E0130 08:09:51.074408 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.090796 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.090840 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.090851 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.090866 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.090905 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:51Z","lastTransitionTime":"2026-01-30T08:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.193799 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.193831 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.193844 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.193863 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.193883 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:51Z","lastTransitionTime":"2026-01-30T08:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.264350 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dpj7j" event={"ID":"228f8bf9-7e75-4886-8441-57bc0d251413","Type":"ContainerStarted","Data":"59a39a83ea2a993841daf6a190fb3bb2d1d7c07fc4b1de61267f753f3336c0ad"} Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.268315 4870 generic.go:334] "Generic (PLEG): container finished" podID="8bdd7f5e-1187-4760-b2dc-98c3d3286f05" containerID="d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f" exitCode=0 Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.268383 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" event={"ID":"8bdd7f5e-1187-4760-b2dc-98c3d3286f05","Type":"ContainerDied","Data":"d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f"} Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.275084 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerStarted","Data":"0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165"} Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.287181 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.295662 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.295696 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.295707 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.295723 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.295734 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:51Z","lastTransitionTime":"2026-01-30T08:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.300872 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.315902 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.329857 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.344408 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.356421 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.383080 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.396178 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.398064 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.398090 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.398099 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.398132 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.398160 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:51Z","lastTransitionTime":"2026-01-30T08:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.409790 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.422710 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.434770 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.448779 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.459049 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.484453 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.497562 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.500786 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.500824 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.500831 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.500847 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.500857 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:51Z","lastTransitionTime":"2026-01-30T08:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.604590 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.604633 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.604643 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.604660 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.604672 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:51Z","lastTransitionTime":"2026-01-30T08:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.707845 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.707907 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.707918 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.707936 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.707947 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:51Z","lastTransitionTime":"2026-01-30T08:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.811378 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.811439 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.811453 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.811479 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.811497 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:51Z","lastTransitionTime":"2026-01-30T08:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.841745 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.849477 4870 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.854133 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/pods/multus-additional-cni-plugins-rrkfz/status\": read tcp 38.129.56.227:58122->38.129.56.227:6443: use of closed network connection" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.894399 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.907405 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.916285 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.916327 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.916338 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.916352 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.916363 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:51Z","lastTransitionTime":"2026-01-30T08:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.921158 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.935105 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.946243 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.958034 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.970672 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:51 crc kubenswrapper[4870]: I0130 08:09:51.989632 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1
003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.000513 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.011989 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.018733 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.018760 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.018769 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.018781 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.018791 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:52Z","lastTransitionTime":"2026-01-30T08:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.021760 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.032684 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.041618 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.046389 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:17:20.036425753 +0000 UTC Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.049488 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.088263 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.121446 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.121506 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.121525 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.121549 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.121569 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:52Z","lastTransitionTime":"2026-01-30T08:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.127339 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf0431
1684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.155135 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6e
d32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.195764 4870 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.224088 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.224122 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:52 crc 
kubenswrapper[4870]: I0130 08:09:52.224131 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.224144 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.224155 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:52Z","lastTransitionTime":"2026-01-30T08:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.239556 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.272436 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.282195 4870 generic.go:334] "Generic (PLEG): container finished" podID="8bdd7f5e-1187-4760-b2dc-98c3d3286f05" containerID="1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63" exitCode=0 Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.282259 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" event={"ID":"8bdd7f5e-1187-4760-b2dc-98c3d3286f05","Type":"ContainerDied","Data":"1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63"} Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.285197 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dpj7j" event={"ID":"228f8bf9-7e75-4886-8441-57bc0d251413","Type":"ContainerStarted","Data":"872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2"} Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.312239 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.326511 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.326705 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.326791 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.326859 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.326945 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:52Z","lastTransitionTime":"2026-01-30T08:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.353913 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.436793 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.436837 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.436850 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.436868 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.436900 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:52Z","lastTransitionTime":"2026-01-30T08:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.442767 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.454170 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.472259 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.516486 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1
003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.539407 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.539442 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.539450 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.539464 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.539473 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:52Z","lastTransitionTime":"2026-01-30T08:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.551351 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.592248 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.628761 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.641525 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.641564 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.641574 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.641589 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.641600 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:52Z","lastTransitionTime":"2026-01-30T08:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.672325 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.715498 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.744414 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.744468 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.744483 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.744501 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.744513 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:52Z","lastTransitionTime":"2026-01-30T08:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.750529 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.793710 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.832290 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.846413 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.846461 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.846477 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.846501 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.846516 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:52Z","lastTransitionTime":"2026-01-30T08:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.870128 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.919621 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",
\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.949674 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.949731 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.949745 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.949767 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.949782 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:52Z","lastTransitionTime":"2026-01-30T08:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.956185 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\
":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:52 crc kubenswrapper[4870]: I0130 08:09:52.997943 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:52Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.032668 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:53Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.046548 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 13:30:11.475967264 +0000 UTC Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.052551 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.052962 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.052985 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.053009 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.053027 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:53Z","lastTransitionTime":"2026-01-30T08:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.074279 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:09:53 crc kubenswrapper[4870]: E0130 08:09:53.074409 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.074724 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:09:53 crc kubenswrapper[4870]: E0130 08:09:53.074787 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.074826 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:09:53 crc kubenswrapper[4870]: E0130 08:09:53.074864 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.078948 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:53Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.112449 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:53Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.153649 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:53Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:53 crc 
kubenswrapper[4870]: I0130 08:09:53.155116 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.155160 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.155174 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.155191 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.155204 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:53Z","lastTransitionTime":"2026-01-30T08:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.195998 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:53Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.231102 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:53Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.257753 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.257809 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.257841 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.257869 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.257899 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:53Z","lastTransitionTime":"2026-01-30T08:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.293675 4870 generic.go:334] "Generic (PLEG): container finished" podID="8bdd7f5e-1187-4760-b2dc-98c3d3286f05" containerID="199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e" exitCode=0 Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.293741 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" event={"ID":"8bdd7f5e-1187-4760-b2dc-98c3d3286f05","Type":"ContainerDied","Data":"199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e"} Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.314170 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerStarted","Data":"8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959"} Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.318186 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:53Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.341427 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:53Z 
is after 2025-08-24T17:21:41Z" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.357161 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:53Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.362654 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.362680 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.362688 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.362701 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.362709 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:53Z","lastTransitionTime":"2026-01-30T08:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.393216 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:53Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.431862 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:53Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.465832 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.465859 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.465866 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.465892 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.465901 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:53Z","lastTransitionTime":"2026-01-30T08:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.470982 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:53Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.518141 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:53Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.550654 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:53Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.568538 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.568565 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.568575 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.568603 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.568612 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:53Z","lastTransitionTime":"2026-01-30T08:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.602645 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:53Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.635145 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:53Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.675702 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.675738 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.675746 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.675759 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.675769 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:53Z","lastTransitionTime":"2026-01-30T08:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.687343 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:53Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.710836 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:53Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.750091 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:53Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.778793 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.778834 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.778865 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.779114 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.779144 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:53Z","lastTransitionTime":"2026-01-30T08:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.791070 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:53Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.828779 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:53Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.881783 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.881841 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.881856 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.881892 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.881905 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:53Z","lastTransitionTime":"2026-01-30T08:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.984635 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.984692 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.984707 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.984723 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:53 crc kubenswrapper[4870]: I0130 08:09:53.984735 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:53Z","lastTransitionTime":"2026-01-30T08:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.046815 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 21:35:59.965153137 +0000 UTC Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.086696 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.086732 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.086743 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.086756 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.086770 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:54Z","lastTransitionTime":"2026-01-30T08:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.189000 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.189030 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.189038 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.189050 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.189060 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:54Z","lastTransitionTime":"2026-01-30T08:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.292241 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.292295 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.292311 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.292329 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.292340 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:54Z","lastTransitionTime":"2026-01-30T08:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.319374 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" event={"ID":"8bdd7f5e-1187-4760-b2dc-98c3d3286f05","Type":"ContainerStarted","Data":"05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f"} Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.337081 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 
1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:54Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.357680 4870 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:54Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.371051 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:54Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.385263 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:54Z is after 2025-08-24T17:21:41Z"
Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.395252 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.395297 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.395339 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.395360 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.395373 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:54Z","lastTransitionTime":"2026-01-30T08:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.397750 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:54Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.408120 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:54Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.420535 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:54Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.431467 4870 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:54Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.449519 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1
003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:54Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.464525 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:54Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.476049 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:54Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.485441 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:54Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.497912 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:54Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.498373 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.498423 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.498439 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.498457 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.498472 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:54Z","lastTransitionTime":"2026-01-30T08:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.512450 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:54Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.527615 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:54Z 
is after 2025-08-24T17:21:41Z" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.601335 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.601596 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.601605 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.601620 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.601629 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:54Z","lastTransitionTime":"2026-01-30T08:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.704301 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.704434 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.704499 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.704577 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.705682 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:54Z","lastTransitionTime":"2026-01-30T08:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.809103 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.809168 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.809191 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.809223 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.809263 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:54Z","lastTransitionTime":"2026-01-30T08:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.910875 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.910938 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.910952 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.910970 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:54 crc kubenswrapper[4870]: I0130 08:09:54.910981 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:54Z","lastTransitionTime":"2026-01-30T08:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.013079 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.013130 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.013142 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.013159 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.013171 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:55Z","lastTransitionTime":"2026-01-30T08:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.047752 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 22:11:35.934798055 +0000 UTC Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.073572 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.073623 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.073614 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:09:55 crc kubenswrapper[4870]: E0130 08:09:55.073735 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:09:55 crc kubenswrapper[4870]: E0130 08:09:55.073997 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:09:55 crc kubenswrapper[4870]: E0130 08:09:55.074165 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.116135 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.116175 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.116186 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.116202 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.116213 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:55Z","lastTransitionTime":"2026-01-30T08:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.219138 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.219212 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.219230 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.219267 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.219288 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:55Z","lastTransitionTime":"2026-01-30T08:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.322588 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.322662 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.322685 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.322715 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.322738 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:55Z","lastTransitionTime":"2026-01-30T08:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.336415 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerStarted","Data":"4e324fc6bf6711c4736623474f414f0cbb0b17c3c705fd969c8183bb670739c0"} Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.336798 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.343688 4870 generic.go:334] "Generic (PLEG): container finished" podID="8bdd7f5e-1187-4760-b2dc-98c3d3286f05" containerID="05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f" exitCode=0 Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.343753 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" event={"ID":"8bdd7f5e-1187-4760-b2dc-98c3d3286f05","Type":"ContainerDied","Data":"05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f"} Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.360358 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.377076 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.390333 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.391207 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.404449 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.420592 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.426070 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.426102 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.426114 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.426131 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.426142 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:55Z","lastTransitionTime":"2026-01-30T08:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.433036 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.443593 4870 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.465598 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e324fc6bf6711c4736623474f414f0cbb0b17c3c705fd969c8183bb670739c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.482709 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.494635 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.507723 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.520200 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.529379 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.529536 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.529617 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.529701 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.529787 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:55Z","lastTransitionTime":"2026-01-30T08:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.532352 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.542821 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.552658 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.564586 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.581545 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e324fc6bf6711c4736623474f414f0cbb0b17c3
c705fd969c8183bb670739c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.592196 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.604270 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.614961 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.624695 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.631586 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.631611 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.631620 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.631632 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.631643 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:55Z","lastTransitionTime":"2026-01-30T08:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.636552 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.677340 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.697214 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.707627 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.717535 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.733968 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.734016 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.734026 4870 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.734044 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.734055 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:55Z","lastTransitionTime":"2026-01-30T08:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.738353 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\
\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.749619 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.761592 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.778106 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:09:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.836298 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.836363 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.836376 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.836413 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.836425 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:55Z","lastTransitionTime":"2026-01-30T08:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.939509 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.939585 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.939625 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.939657 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:55 crc kubenswrapper[4870]: I0130 08:09:55.939678 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:55Z","lastTransitionTime":"2026-01-30T08:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.042728 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.042788 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.042800 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.042835 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.042848 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:56Z","lastTransitionTime":"2026-01-30T08:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.047922 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 04:58:52.429483404 +0000 UTC Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.145748 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.145836 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.145854 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.145906 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.145928 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:56Z","lastTransitionTime":"2026-01-30T08:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.248967 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.249052 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.249071 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.249095 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.249139 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:56Z","lastTransitionTime":"2026-01-30T08:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.350912 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.350964 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.351012 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.351033 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.351048 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:56Z","lastTransitionTime":"2026-01-30T08:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.352928 4870 generic.go:334] "Generic (PLEG): container finished" podID="8bdd7f5e-1187-4760-b2dc-98c3d3286f05" containerID="7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1" exitCode=0 Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.353121 4870 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.353274 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" event={"ID":"8bdd7f5e-1187-4760-b2dc-98c3d3286f05","Type":"ContainerDied","Data":"7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1"} Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.358266 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.374605 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.392132 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.394053 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.403520 4870 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.427702 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1
003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.440410 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.453717 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.453789 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.453821 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.453834 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.453852 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.453868 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:56Z","lastTransitionTime":"2026-01-30T08:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.465183 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.479061 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.491105 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.507729 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.527667 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e324fc6bf6711c4736623474f414f0cbb0b17c3c705fd969c8183bb670739c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"
mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.543301 4870 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.556077 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.556113 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.556135 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.556048 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.556155 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.556254 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:56Z","lastTransitionTime":"2026-01-30T08:09:56Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.568368 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.581182 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.593476 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.604811 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.615521 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.623929 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.639651 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.649841 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.659138 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.659178 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.659189 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.659205 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.659217 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:56Z","lastTransitionTime":"2026-01-30T08:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.659734 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.675857 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",
\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.689080 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.703046 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.720514 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e324fc6bf6711c4736623474f414f0cbb0b17c3
c705fd969c8183bb670739c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.736026 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.753128 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.762222 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.762264 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.762278 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.762298 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.762312 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:56Z","lastTransitionTime":"2026-01-30T08:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.767215 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:09:56 crc kubenswrapper[4870]: E0130 08:09:56.767347 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:10:12.767323 +0000 UTC m=+51.462870129 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.791973 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc4782
74c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.833589 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.865404 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.865443 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.865455 4870 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.865472 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.865485 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:56Z","lastTransitionTime":"2026-01-30T08:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.868083 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.868124 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.868160 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.868182 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:09:56 crc kubenswrapper[4870]: E0130 08:09:56.868201 4870 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:09:56 crc kubenswrapper[4870]: E0130 08:09:56.868268 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:10:12.868248347 +0000 UTC m=+51.563795466 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:09:56 crc kubenswrapper[4870]: E0130 08:09:56.868289 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:09:56 crc kubenswrapper[4870]: E0130 08:09:56.868308 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:09:56 crc kubenswrapper[4870]: E0130 08:09:56.868319 4870 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:09:56 crc kubenswrapper[4870]: E0130 08:09:56.868361 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:10:12.86834658 +0000 UTC m=+51.563893699 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:09:56 crc kubenswrapper[4870]: E0130 08:09:56.868358 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:09:56 crc kubenswrapper[4870]: E0130 08:09:56.868401 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:09:56 crc kubenswrapper[4870]: E0130 08:09:56.868415 4870 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:09:56 crc kubenswrapper[4870]: E0130 08:09:56.868493 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:10:12.868472294 +0000 UTC m=+51.564019443 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:09:56 crc kubenswrapper[4870]: E0130 08:09:56.868422 4870 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:09:56 crc kubenswrapper[4870]: E0130 08:09:56.868566 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:10:12.868550846 +0000 UTC m=+51.564098005 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.968033 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.968078 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.968094 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.968110 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:56 crc kubenswrapper[4870]: I0130 08:09:56.968123 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:56Z","lastTransitionTime":"2026-01-30T08:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.048792 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 19:57:52.909350987 +0000 UTC Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.072667 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.072739 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.072764 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.072794 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.072817 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:57Z","lastTransitionTime":"2026-01-30T08:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.074006 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:09:57 crc kubenswrapper[4870]: E0130 08:09:57.074193 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.074806 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:09:57 crc kubenswrapper[4870]: E0130 08:09:57.074994 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.075116 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:09:57 crc kubenswrapper[4870]: E0130 08:09:57.075253 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.175473 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.175509 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.175520 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.175541 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.175550 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:57Z","lastTransitionTime":"2026-01-30T08:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.277186 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.277220 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.277231 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.277244 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.277252 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:57Z","lastTransitionTime":"2026-01-30T08:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.359551 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" event={"ID":"8bdd7f5e-1187-4760-b2dc-98c3d3286f05","Type":"ContainerStarted","Data":"b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a"} Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.359653 4870 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.373370 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:57Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.379516 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.379562 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.379577 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.379593 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.379603 4870 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:57Z","lastTransitionTime":"2026-01-30T08:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.390553 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:57Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.418976 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:57Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.434243 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:57Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.448018 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:57Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.457084 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:09:57Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.470595 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:57Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.481454 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.481494 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.481505 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.481519 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.481529 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:57Z","lastTransitionTime":"2026-01-30T08:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.482082 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:57Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.490257 4870 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:57Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.506675 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e324fc6bf6711c4736623474f414f0cbb0b17c3c705fd969c8183bb670739c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:57Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.519282 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:57Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.532498 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:57Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.544202 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:57Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.547414 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.547457 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.547469 4870 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.547486 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.547497 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:57Z","lastTransitionTime":"2026-01-30T08:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.557353 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:57Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:57 crc kubenswrapper[4870]: E0130 08:09:57.559655 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"42bb4058-de5f-47d3-b90e-bda57dd064e9\\\",\\\"systemUUID\\\":\\\"7dbac932-0e54-4045-a1f0-fa334c8e1b7e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:57Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.562856 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.562924 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.562938 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.562957 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.562970 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:57Z","lastTransitionTime":"2026-01-30T08:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.568982 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:57Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:57 crc kubenswrapper[4870]: E0130 08:09:57.574289 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node-status patch payload identical to the full payload shown above; elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:57Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.578023 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.578058 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.578066 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.578080 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.578089 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:57Z","lastTransitionTime":"2026-01-30T08:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
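Every failed patch above has the same proximate cause: the kubelet's HTTPS Post to the network-node-identity webhook on 127.0.0.1:9743 fails TLS verification because the webhook's serving certificate expired at 2025-08-24T17:21:41Z while the node clock reads 2026-01-30. Below is a minimal Go sketch of the same validity check; it is a diagnostic aid, not part of any component in this log, and it assumes it runs on the node itself, with the endpoint address taken verbatim from the log.

// certcheck.go - diagnostic sketch: dial the webhook endpoint named in
// the log, skip chain verification so the handshake succeeds even with
// an expired certificate, and report the leaf certificate's validity
// window.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Address taken from the failing Post in the log entries above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // diagnostic only: accept the expired cert
	})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now()
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if now.After(cert.NotAfter) {
		// The same condition the kubelet's TLS client reports as
		// "certificate has expired or is not yet valid".
		fmt.Printf("EXPIRED: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
}

Run against this node, it should print a notAfter of 2025-08-24T17:21:41Z and flag the certificate as expired, matching the x509 error in every retry above.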
Jan 30 08:09:57 crc kubenswrapper[4870]: E0130 08:09:57.590135 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node-status patch payload identical to the full payload shown above; elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:57Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.593652 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.593672 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.593681 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.593695 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.593705 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:57Z","lastTransitionTime":"2026-01-30T08:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:57 crc kubenswrapper[4870]: E0130 08:09:57.606374 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node-status patch payload identical to the full payload shown above; elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:57Z is after 2025-08-24T17:21:41Z"
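Independently of the webhook failure, the Ready=False condition carries its own cause: the container runtime reports NetworkPluginNotReady because /etc/kubernetes/cni/net.d/ holds no CNI configuration. The sketch below reproduces that directory check; the path comes straight from the logged message, while treating .conf, .conflist, and .json as the extensions a CNI config loader accepts is an assumption.

// cnicheck.go - diagnostic sketch: check whether any CNI network config
// exists in the directory named by the NetworkPluginNotReady message.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatalf("read %s: %v", dir, err)
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed CNI config extensions
			fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		// Mirrors the runtime's complaint in the log: the network
		// plugin has presumably not written its config yet.
		fmt.Println("no CNI configuration file in", dir)
	}
}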
Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.609270 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.609308 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.609316 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.609354 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.609364 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:57Z","lastTransitionTime":"2026-01-30T08:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:57 crc kubenswrapper[4870]: E0130 08:09:57.631784 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node-status patch payload identical to the full payload shown above; elided; the captured log breaks off partway through this entry]
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"42bb4058-de5f-47d3-b90e-bda57dd064e9\\\",\\\"systemUUID\\\":\\\"7dbac932-0e54-4045-a1f0-fa334c8e1b7e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:57Z is after 
2025-08-24T17:21:41Z" Jan 30 08:09:57 crc kubenswrapper[4870]: E0130 08:09:57.631921 4870 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.633614 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.633640 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.633649 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.633662 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.633671 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:57Z","lastTransitionTime":"2026-01-30T08:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.736619 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.736686 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.736696 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.736710 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.736718 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:57Z","lastTransitionTime":"2026-01-30T08:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.838648 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.838674 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.838682 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.838695 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.838705 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:57Z","lastTransitionTime":"2026-01-30T08:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.940871 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.940982 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.941012 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.941040 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:57 crc kubenswrapper[4870]: I0130 08:09:57.941058 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:57Z","lastTransitionTime":"2026-01-30T08:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.043528 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.043563 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.043572 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.043586 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.043595 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:58Z","lastTransitionTime":"2026-01-30T08:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.049034 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 05:31:50.827947012 +0000 UTC Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.146571 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.146632 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.146650 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.146673 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.146690 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:58Z","lastTransitionTime":"2026-01-30T08:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.249985 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.250056 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.250075 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.250098 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.250115 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:58Z","lastTransitionTime":"2026-01-30T08:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.353743 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.353823 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.353841 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.353868 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.353953 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:58Z","lastTransitionTime":"2026-01-30T08:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.366949 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj5db_36037609-52f9-4c09-8beb-6d35a039347b/ovnkube-controller/0.log" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.371446 4870 generic.go:334] "Generic (PLEG): container finished" podID="36037609-52f9-4c09-8beb-6d35a039347b" containerID="4e324fc6bf6711c4736623474f414f0cbb0b17c3c705fd969c8183bb670739c0" exitCode=1 Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.371678 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerDied","Data":"4e324fc6bf6711c4736623474f414f0cbb0b17c3c705fd969c8183bb670739c0"} Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.372459 4870 scope.go:117] "RemoveContainer" containerID="4e324fc6bf6711c4736623474f414f0cbb0b17c3c705fd969c8183bb670739c0" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.397292 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:58Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.433439 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e324fc6bf6711c4736623474f414f0cbb0b17c3c705fd969c8183bb670739c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e324fc6bf6711c4736623474f414f0cbb0b17c3c705fd969c8183bb670739c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"message\\\":\\\"d *v1.Pod event handler 3\\\\nI0130 08:09:57.802952 6155 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 08:09:57.803083 6155 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:09:57.803225 6155 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 08:09:57.803438 6155 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 08:09:57.803479 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 08:09:57.803490 6155 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 08:09:57.803503 6155 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 08:09:57.803541 6155 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 08:09:57.803558 6155 factory.go:656] Stopping watch factory\\\\nI0130 08:09:57.803557 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 08:09:57.803574 6155 ovnkube.go:599] Stopped ovnkube\\\\nI0130 08:09:57.803568 6155 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:58Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.451979 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-cr
c-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 
08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:58Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.457176 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.457234 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.457253 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.457280 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.457296 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:58Z","lastTransitionTime":"2026-01-30T08:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.467461 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\
"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:58Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.484333 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:58Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.541342 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:58Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.561596 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:58Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.561852 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.561938 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.561952 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.561967 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.561979 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:58Z","lastTransitionTime":"2026-01-30T08:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.574599 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:58Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.590201 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:58Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.621197 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1
003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:58Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.637259 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:58Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.652684 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:58Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.664245 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.664278 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.664286 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.664299 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.664308 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:58Z","lastTransitionTime":"2026-01-30T08:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.667380 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:58Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.681462 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:58Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.695836 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:58Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.767249 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.767305 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.767317 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.767337 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.767350 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:58Z","lastTransitionTime":"2026-01-30T08:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.869705 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.869757 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.869770 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.869789 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.869801 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:58Z","lastTransitionTime":"2026-01-30T08:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.972387 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.972420 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.972430 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.972443 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:58 crc kubenswrapper[4870]: I0130 08:09:58.972451 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:58Z","lastTransitionTime":"2026-01-30T08:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.049739 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 22:56:24.089448273 +0000 UTC Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.073613 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.073613 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:09:59 crc kubenswrapper[4870]: E0130 08:09:59.073726 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:09:59 crc kubenswrapper[4870]: E0130 08:09:59.073831 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.073615 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:09:59 crc kubenswrapper[4870]: E0130 08:09:59.074044 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.075038 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.075076 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.075087 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.075103 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.075116 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:59Z","lastTransitionTime":"2026-01-30T08:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.177119 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.177190 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.177203 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.177219 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.177231 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:59Z","lastTransitionTime":"2026-01-30T08:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.279079 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.279113 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.279126 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.279147 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.279158 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:59Z","lastTransitionTime":"2026-01-30T08:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.377958 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj5db_36037609-52f9-4c09-8beb-6d35a039347b/ovnkube-controller/0.log" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.381489 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerStarted","Data":"bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483"} Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.381700 4870 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.382253 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.382292 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.382308 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.382325 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.382338 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:59Z","lastTransitionTime":"2026-01-30T08:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.401851 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.417290 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.431752 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.444614 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.459023 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.470071 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.484838 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:59 crc kubenswrapper[4870]: 
I0130 08:09:59.484916 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.484932 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.484966 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.484984 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:59Z","lastTransitionTime":"2026-01-30T08:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.492221 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547
c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.506825 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.526345 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e324fc6bf6711c4736623474f414f0cbb0b17c3c705fd969c8183bb670739c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"message\\\":\\\"d *v1.Pod event handler 3\\\\nI0130 08:09:57.802952 6155 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 08:09:57.803083 6155 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:09:57.803225 6155 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 08:09:57.803438 6155 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 08:09:57.803479 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 08:09:57.803490 6155 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 08:09:57.803503 6155 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 08:09:57.803541 6155 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 08:09:57.803558 6155 factory.go:656] Stopping watch factory\\\\nI0130 08:09:57.803557 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 08:09:57.803574 6155 ovnkube.go:599] Stopped ovnkube\\\\nI0130 08:09:57.803568 6155 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.538038 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.549549 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.561333 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.576009 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-cont
roller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 
08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.587527 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.587584 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.587602 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 
08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.587625 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.587642 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:59Z","lastTransitionTime":"2026-01-30T08:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.590474 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.601083 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.681350 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq"] Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.682178 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.685476 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.686046 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.689529 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.689564 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.689577 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.689595 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.689609 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:59Z","lastTransitionTime":"2026-01-30T08:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.694170 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q5xdq\" (UID: \"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.694238 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q5xdq\" (UID: \"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.694444 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fglbm\" (UniqueName: \"kubernetes.io/projected/b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8-kube-api-access-fglbm\") pod \"ovnkube-control-plane-749d76644c-q5xdq\" (UID: \"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.694536 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q5xdq\" (UID: \"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.698025 4870 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.709197 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.721547 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.742206 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.756099 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.772055 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.783835 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.792764 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.792832 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.792856 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.792922 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.792955 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:59Z","lastTransitionTime":"2026-01-30T08:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.795298 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q5xdq\" (UID: \"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.795357 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q5xdq\" (UID: \"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.795386 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q5xdq\" (UID: \"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.795457 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fglbm\" (UniqueName: \"kubernetes.io/projected/b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8-kube-api-access-fglbm\") pod \"ovnkube-control-plane-749d76644c-q5xdq\" (UID: \"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" Jan 30 08:09:59 crc 
kubenswrapper[4870]: I0130 08:09:59.796026 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q5xdq\" (UID: \"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.796495 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q5xdq\" (UID: \"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.798728 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 
08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.805127 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q5xdq\" (UID: \"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.812849 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fglbm\" (UniqueName: \"kubernetes.io/projected/b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8-kube-api-access-fglbm\") pod \"ovnkube-control-plane-749d76644c-q5xdq\" (UID: \"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.824759 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.840366 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5xdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.857916 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.885299 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e324fc6bf6711c4736623474f414f0cbb0b17c3c705fd969c8183bb670739c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"message\\\":\\\"d *v1.Pod event handler 3\\\\nI0130 08:09:57.802952 6155 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 08:09:57.803083 6155 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:09:57.803225 6155 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 08:09:57.803438 6155 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 08:09:57.803479 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 08:09:57.803490 6155 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 08:09:57.803503 6155 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 08:09:57.803541 6155 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 08:09:57.803558 6155 factory.go:656] Stopping watch factory\\\\nI0130 08:09:57.803557 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 08:09:57.803574 6155 ovnkube.go:599] Stopped ovnkube\\\\nI0130 08:09:57.803568 6155 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.896123 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.896175 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.896195 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.896219 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.896237 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:59Z","lastTransitionTime":"2026-01-30T08:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.900968 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.921207 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.940607 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.963329 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-cont
roller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 
08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:09:59Z is after 2025-08-24T17:21:41Z" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.995091 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.998448 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.998633 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.998754 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.998880 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:09:59 crc kubenswrapper[4870]: I0130 08:09:59.999021 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:09:59Z","lastTransitionTime":"2026-01-30T08:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:00 crc kubenswrapper[4870]: W0130 08:10:00.014727 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb66c5f2c_1e0e_4d09_ab12_8cd255f29aa8.slice/crio-6666711ec23bba216eebd7fab9fe80d59045c0400d3fd8b44279e456d40e7f83 WatchSource:0}: Error finding container 6666711ec23bba216eebd7fab9fe80d59045c0400d3fd8b44279e456d40e7f83: Status 404 returned error can't find the container with id 6666711ec23bba216eebd7fab9fe80d59045c0400d3fd8b44279e456d40e7f83 Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.050134 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 10:11:27.685869281 +0000 UTC Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.101431 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.101477 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.101494 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.101526 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.101549 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:00Z","lastTransitionTime":"2026-01-30T08:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.204138 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.204196 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.204218 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.204247 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.204270 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:00Z","lastTransitionTime":"2026-01-30T08:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.306839 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.307115 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.307203 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.307287 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.307366 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:00Z","lastTransitionTime":"2026-01-30T08:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.388556 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj5db_36037609-52f9-4c09-8beb-6d35a039347b/ovnkube-controller/1.log" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.389422 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj5db_36037609-52f9-4c09-8beb-6d35a039347b/ovnkube-controller/0.log" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.392525 4870 generic.go:334] "Generic (PLEG): container finished" podID="36037609-52f9-4c09-8beb-6d35a039347b" containerID="bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483" exitCode=1 Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.392624 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerDied","Data":"bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483"} Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.392701 4870 scope.go:117] "RemoveContainer" containerID="4e324fc6bf6711c4736623474f414f0cbb0b17c3c705fd969c8183bb670739c0" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.393930 4870 scope.go:117] "RemoveContainer" containerID="bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483" Jan 30 08:10:00 crc kubenswrapper[4870]: E0130 08:10:00.394227 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" podUID="36037609-52f9-4c09-8beb-6d35a039347b" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.397921 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" event={"ID":"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8","Type":"ContainerStarted","Data":"7a87e50cd50d8225874491d677cc6961215e485310948984353dda93ab97eced"} Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.397970 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" event={"ID":"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8","Type":"ContainerStarted","Data":"ebe7ab8f2efaea80fb8f85a7d33f79d70eb848a65d1fc6dd1cacacc479c1065c"} Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.397986 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" event={"ID":"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8","Type":"ContainerStarted","Data":"6666711ec23bba216eebd7fab9fe80d59045c0400d3fd8b44279e456d40e7f83"} Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.408247 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.409196 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.409232 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.409242 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.409256 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.409265 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:00Z","lastTransitionTime":"2026-01-30T08:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.424801 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.440398 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.459793 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.476409 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.493068 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.512137 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.512176 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.512187 4870 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.512205 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.512218 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:00Z","lastTransitionTime":"2026-01-30T08:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.512205 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\
\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.522610 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5xdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.535333 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.555452 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e324fc6bf6711c4736623474f414f0cbb0b17c3c705fd969c8183bb670739c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"message\\\":\\\"d *v1.Pod event handler 3\\\\nI0130 08:09:57.802952 6155 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 08:09:57.803083 6155 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:09:57.803225 6155 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 08:09:57.803438 6155 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 08:09:57.803479 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 08:09:57.803490 6155 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 08:09:57.803503 6155 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 08:09:57.803541 6155 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 08:09:57.803558 6155 factory.go:656] Stopping watch factory\\\\nI0130 08:09:57.803557 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 08:09:57.803574 6155 ovnkube.go:599] Stopped ovnkube\\\\nI0130 08:09:57.803568 6155 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"message\\\":\\\"ancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0130 08:09:59.417960 6308 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 2.774229ms\\\\nI0130 08:09:59.418058 6308 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 08:09:59.418156 6308 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 08:09:59.418188 6308 factory.go:656] Stopping watch factory\\\\nI0130 08:09:59.418151 6308 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI0130 08:09:59.418201 6308 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 08:09:59.418201 6308 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 3.106099ms\\\\nI0130 08:09:59.418213 6308 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 08:09:59.418248 6308 ovnkube.go:599] Stopped ovnkube\\\\nI0130 08:09:59.418286 6308 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 08:09:59.418396 6308 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad85
7b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.567616 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.581778 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.594169 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.610413 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-cont
roller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 
08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.614111 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.614146 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.614157 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 
08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.614173 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.614184 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:00Z","lastTransitionTime":"2026-01-30T08:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.621803 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.634343 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.645997 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.657165 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.668088 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.679814 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.689663 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.706858 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1
003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.716305 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.716368 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.716390 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.716415 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.716432 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:00Z","lastTransitionTime":"2026-01-30T08:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.718359 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.735646 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.750420 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.766015 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce
07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.785729 4870 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17
4f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e324fc6bf6711c4736623474f414f0cbb0b17c3c705fd969c8183bb670739c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"message\\\":\\\"d *v1.Pod event handler 3\\\\nI0130 08:09:57.802952 6155 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 08:09:57.803083 6155 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:09:57.803225 6155 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 08:09:57.803438 6155 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 08:09:57.803479 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 08:09:57.803490 6155 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 08:09:57.803503 6155 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 08:09:57.803541 6155 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 08:09:57.803558 6155 factory.go:656] Stopping watch factory\\\\nI0130 08:09:57.803557 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 08:09:57.803574 6155 ovnkube.go:599] Stopped ovnkube\\\\nI0130 08:09:57.803568 6155 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"message\\\":\\\"ancer.go:304] Deleted 0 stale LBs for 
map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0130 08:09:59.417960 6308 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 2.774229ms\\\\nI0130 08:09:59.418058 6308 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 08:09:59.418156 6308 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 08:09:59.418188 6308 factory.go:656] Stopping watch factory\\\\nI0130 08:09:59.418151 6308 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI0130 08:09:59.418201 6308 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 08:09:59.418201 6308 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 3.106099ms\\\\nI0130 08:09:59.418213 6308 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 08:09:59.418248 6308 ovnkube.go:599] Stopped ovnkube\\\\nI0130 08:09:59.418286 6308 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 08:09:59.418396 6308 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.795945 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe7ab8f2efaea80fb8f85a7d33f79d70eb848a65d1fc6dd1cacacc479c1065c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a87e50cd50d8225874491d677cc6961215e485310948984353dda93ab97eced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5xdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 
08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.808048 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.818970 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.818948 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.819002 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.819012 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.819025 4870 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.819034 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:00Z","lastTransitionTime":"2026-01-30T08:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.830236 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.839838 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:00Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.921432 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.921466 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.921476 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.921492 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:00 crc kubenswrapper[4870]: I0130 08:10:00.921504 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:00Z","lastTransitionTime":"2026-01-30T08:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.024205 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.024239 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.024251 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.024266 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.024278 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:01Z","lastTransitionTime":"2026-01-30T08:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.051224 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 08:16:28.878197588 +0000 UTC Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.074573 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.074641 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:01 crc kubenswrapper[4870]: E0130 08:10:01.074770 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.074797 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:01 crc kubenswrapper[4870]: E0130 08:10:01.074977 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:01 crc kubenswrapper[4870]: E0130 08:10:01.075062 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.127236 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.127284 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.127297 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.127316 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.127329 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:01Z","lastTransitionTime":"2026-01-30T08:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.229318 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.229384 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.229394 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.229406 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.229415 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:01Z","lastTransitionTime":"2026-01-30T08:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.333095 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.333133 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.333142 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.333154 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.333164 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:01Z","lastTransitionTime":"2026-01-30T08:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.405295 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj5db_36037609-52f9-4c09-8beb-6d35a039347b/ovnkube-controller/1.log" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.411719 4870 scope.go:117] "RemoveContainer" containerID="bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483" Jan 30 08:10:01 crc kubenswrapper[4870]: E0130 08:10:01.412075 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" podUID="36037609-52f9-4c09-8beb-6d35a039347b" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.436843 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.436920 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.436943 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.436962 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.436975 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:01Z","lastTransitionTime":"2026-01-30T08:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.437771 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.467506 4870 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"message\\\":\\\"ancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0130 08:09:59.417960 6308 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 2.774229ms\\\\nI0130 08:09:59.418058 6308 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 08:09:59.418156 6308 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 08:09:59.418188 6308 factory.go:656] Stopping watch factory\\\\nI0130 08:09:59.418151 6308 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI0130 08:09:59.418201 6308 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 08:09:59.418201 6308 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 3.106099ms\\\\nI0130 08:09:59.418213 6308 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 08:09:59.418248 6308 ovnkube.go:599] Stopped ovnkube\\\\nI0130 08:09:59.418286 6308 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 08:09:59.418396 6308 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.482336 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe7ab8f2efaea80fb8f85a7d33f79d70eb848a65d1fc6dd1cacacc479c1065c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a87e50cd50d8225874491d677cc6961215e485310948984353dda93ab97eced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5xdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.503812 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.516561 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-mp9vw"] Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.517494 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:01 crc kubenswrapper[4870]: E0130 08:10:01.517626 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.524032 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.538329 4870 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.539152 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.539190 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.539202 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.539218 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.539231 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:01Z","lastTransitionTime":"2026-01-30T08:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.553916 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.570207 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.585467 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.614122 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx29n\" (UniqueName: \"kubernetes.io/projected/7b976744-b72d-4291-a32f-437fc1cfbf03-kube-api-access-rx29n\") pod \"network-metrics-daemon-mp9vw\" (UID: \"7b976744-b72d-4291-a32f-437fc1cfbf03\") " pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.614206 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs\") pod \"network-metrics-daemon-mp9vw\" (UID: \"7b976744-b72d-4291-a32f-437fc1cfbf03\") " pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.624690 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.642128 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.642210 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.642233 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.642259 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.642277 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:01Z","lastTransitionTime":"2026-01-30T08:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.644474 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.662081 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.676270 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.693585 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.706744 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.714651 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs\") pod \"network-metrics-daemon-mp9vw\" (UID: \"7b976744-b72d-4291-a32f-437fc1cfbf03\") " pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.714724 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx29n\" (UniqueName: \"kubernetes.io/projected/7b976744-b72d-4291-a32f-437fc1cfbf03-kube-api-access-rx29n\") pod \"network-metrics-daemon-mp9vw\" (UID: \"7b976744-b72d-4291-a32f-437fc1cfbf03\") " pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:01 crc kubenswrapper[4870]: E0130 08:10:01.714774 4870 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:10:01 crc kubenswrapper[4870]: E0130 08:10:01.714830 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs podName:7b976744-b72d-4291-a32f-437fc1cfbf03 nodeName:}" failed. No retries permitted until 2026-01-30 08:10:02.214814545 +0000 UTC m=+40.910361664 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs") pod "network-metrics-daemon-mp9vw" (UID: "7b976744-b72d-4291-a32f-437fc1cfbf03") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.721816 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.732579 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx29n\" (UniqueName: \"kubernetes.io/projected/7b976744-b72d-4291-a32f-437fc1cfbf03-kube-api-access-rx29n\") pod \"network-metrics-daemon-mp9vw\" (UID: \"7b976744-b72d-4291-a32f-437fc1cfbf03\") " pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.741665 4870 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.744447 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.744485 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.744496 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.744510 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.744518 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:01Z","lastTransitionTime":"2026-01-30T08:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.754413 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.769315 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.782577 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.797841 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.812559 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.837529 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.846306 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.846343 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.846354 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.846370 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.846382 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:01Z","lastTransitionTime":"2026-01-30T08:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.856593 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.867996 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.877048 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.887502 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.898801 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.907531 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.922973 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"message\\\":\\\"ancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0130 08:09:59.417960 6308 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 2.774229ms\\\\nI0130 08:09:59.418058 6308 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 08:09:59.418156 6308 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 08:09:59.418188 6308 factory.go:656] Stopping watch factory\\\\nI0130 08:09:59.418151 6308 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI0130 08:09:59.418201 6308 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 08:09:59.418201 6308 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 3.106099ms\\\\nI0130 08:09:59.418213 6308 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 08:09:59.418248 6308 ovnkube.go:599] Stopped ovnkube\\\\nI0130 08:09:59.418286 6308 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 08:09:59.418396 6308 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.933743 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe7ab8f2efaea80fb8f85a7d33f79d70eb848a65d1fc6dd1cacacc479c1065c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a87e50cd50d8225874491d677cc6961215e485310948984353dda93ab97eced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5xdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.943346 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mp9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b976744-b72d-4291-a32f-437fc1cfbf03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:10:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mp9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.948331 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.948361 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.948374 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.948391 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.948402 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:01Z","lastTransitionTime":"2026-01-30T08:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:01 crc kubenswrapper[4870]: I0130 08:10:01.956997 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:01Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.050437 4870 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.050482 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.050499 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.050515 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.050526 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:02Z","lastTransitionTime":"2026-01-30T08:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.051580 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 20:21:46.2298584 +0000 UTC Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.092401 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfadb4b11c7243c5de202188c1e62ac35d26e29e
9c6a1c53063369253c1e2483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"message\\\":\\\"ancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0130 08:09:59.417960 6308 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 2.774229ms\\\\nI0130 08:09:59.418058 6308 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 08:09:59.418156 6308 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 08:09:59.418188 6308 factory.go:656] Stopping watch factory\\\\nI0130 08:09:59.418151 6308 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI0130 08:09:59.418201 6308 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 08:09:59.418201 6308 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 3.106099ms\\\\nI0130 08:09:59.418213 6308 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 08:09:59.418248 6308 ovnkube.go:599] Stopped ovnkube\\\\nI0130 08:09:59.418286 6308 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 08:09:59.418396 6308 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.104735 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe7ab8f2efaea80fb8f85a7d33f79d70eb848a65d1fc6dd1cacacc479c1065c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a87e50cd50d8225874491d677cc6961215e485310948984353dda93ab97eced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5xdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.115435 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mp9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b976744-b72d-4291-a32f-437fc1cfbf03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:10:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mp9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.128681 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.152557 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.153603 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.153650 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.153663 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.153679 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.153695 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:02Z","lastTransitionTime":"2026-01-30T08:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.194481 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.217740 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs\") pod \"network-metrics-daemon-mp9vw\" (UID: \"7b976744-b72d-4291-a32f-437fc1cfbf03\") " pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:02 crc kubenswrapper[4870]: E0130 08:10:02.217855 4870 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:10:02 crc kubenswrapper[4870]: E0130 08:10:02.218049 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs podName:7b976744-b72d-4291-a32f-437fc1cfbf03 nodeName:}" failed. No retries permitted until 2026-01-30 08:10:03.217971274 +0000 UTC m=+41.913518383 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs") pod "network-metrics-daemon-mp9vw" (UID: "7b976744-b72d-4291-a32f-437fc1cfbf03") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.230277 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.256360 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.256619 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.256704 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.256793 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.256894 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:02Z","lastTransitionTime":"2026-01-30T08:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.272365 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.311632 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.352036 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.358963 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.359094 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.359173 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.359280 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.359359 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:02Z","lastTransitionTime":"2026-01-30T08:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.396383 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.436844 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.461823 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.461874 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.461914 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.461932 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.461944 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:02Z","lastTransitionTime":"2026-01-30T08:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.471423 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.513608 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.551042 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.564143 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.564177 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.564188 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.564203 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.564214 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:02Z","lastTransitionTime":"2026-01-30T08:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.590403 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.631755 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.667327 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.667457 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.667495 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.667540 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.667567 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:02Z","lastTransitionTime":"2026-01-30T08:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.771130 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.771190 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.771211 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.771241 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.771265 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:02Z","lastTransitionTime":"2026-01-30T08:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.874484 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.874562 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.874584 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.874608 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.874624 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:02Z","lastTransitionTime":"2026-01-30T08:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.977916 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.977978 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.978005 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.978030 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:02 crc kubenswrapper[4870]: I0130 08:10:02.978049 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:02Z","lastTransitionTime":"2026-01-30T08:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.052251 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 01:42:55.691324766 +0000 UTC Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.073802 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.073831 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:03 crc kubenswrapper[4870]: E0130 08:10:03.073979 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.074105 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.074111 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:03 crc kubenswrapper[4870]: E0130 08:10:03.074296 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:03 crc kubenswrapper[4870]: E0130 08:10:03.074456 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:03 crc kubenswrapper[4870]: E0130 08:10:03.074599 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.081630 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.081715 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.081741 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.081774 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.081798 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:03Z","lastTransitionTime":"2026-01-30T08:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.184690 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.184750 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.184768 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.184792 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.184810 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:03Z","lastTransitionTime":"2026-01-30T08:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.229758 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs\") pod \"network-metrics-daemon-mp9vw\" (UID: \"7b976744-b72d-4291-a32f-437fc1cfbf03\") " pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:03 crc kubenswrapper[4870]: E0130 08:10:03.229992 4870 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:10:03 crc kubenswrapper[4870]: E0130 08:10:03.230116 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs podName:7b976744-b72d-4291-a32f-437fc1cfbf03 nodeName:}" failed. No retries permitted until 2026-01-30 08:10:05.230083695 +0000 UTC m=+43.925630844 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs") pod "network-metrics-daemon-mp9vw" (UID: "7b976744-b72d-4291-a32f-437fc1cfbf03") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.287455 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.287490 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.287501 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.287516 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.287528 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:03Z","lastTransitionTime":"2026-01-30T08:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.390388 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.390439 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.390455 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.390475 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.390489 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:03Z","lastTransitionTime":"2026-01-30T08:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.492993 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.493059 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.493077 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.493100 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.493124 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:03Z","lastTransitionTime":"2026-01-30T08:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.596459 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.596501 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.596511 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.596526 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.596534 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:03Z","lastTransitionTime":"2026-01-30T08:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.699036 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.699084 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.699108 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.699126 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.699138 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:03Z","lastTransitionTime":"2026-01-30T08:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.803123 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.803172 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.803186 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.803207 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.803219 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:03Z","lastTransitionTime":"2026-01-30T08:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.908815 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.909092 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.909565 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.909589 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:03 crc kubenswrapper[4870]: I0130 08:10:03.909604 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:03Z","lastTransitionTime":"2026-01-30T08:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.013022 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.013072 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.013081 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.013097 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.013107 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:04Z","lastTransitionTime":"2026-01-30T08:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.053282 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 08:08:53.594340843 +0000 UTC Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.116030 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.116065 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.116074 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.116088 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.116098 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:04Z","lastTransitionTime":"2026-01-30T08:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.218955 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.219047 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.219062 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.219087 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.219105 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:04Z","lastTransitionTime":"2026-01-30T08:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.323660 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.323738 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.323752 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.323780 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.323793 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:04Z","lastTransitionTime":"2026-01-30T08:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.426957 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.427044 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.427071 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.427110 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.427136 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:04Z","lastTransitionTime":"2026-01-30T08:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.530584 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.530641 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.530654 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.530677 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.530692 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:04Z","lastTransitionTime":"2026-01-30T08:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.633435 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.633488 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.633505 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.633529 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.633563 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:04Z","lastTransitionTime":"2026-01-30T08:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.737421 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.737491 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.737512 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.737541 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.737561 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:04Z","lastTransitionTime":"2026-01-30T08:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.841020 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.841073 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.841087 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.841104 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.841116 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:04Z","lastTransitionTime":"2026-01-30T08:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.944193 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.944271 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.944297 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.944329 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:04 crc kubenswrapper[4870]: I0130 08:10:04.944371 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:04Z","lastTransitionTime":"2026-01-30T08:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.048003 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.048052 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.048073 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.048093 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.048107 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:05Z","lastTransitionTime":"2026-01-30T08:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.053768 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 09:31:10.29792682 +0000 UTC Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.074518 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.074571 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.074550 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.074616 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:05 crc kubenswrapper[4870]: E0130 08:10:05.074685 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:05 crc kubenswrapper[4870]: E0130 08:10:05.074945 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:05 crc kubenswrapper[4870]: E0130 08:10:05.075154 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:05 crc kubenswrapper[4870]: E0130 08:10:05.075187 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.151324 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.151359 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.151368 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.151382 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.151391 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:05Z","lastTransitionTime":"2026-01-30T08:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.253719 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs\") pod \"network-metrics-daemon-mp9vw\" (UID: \"7b976744-b72d-4291-a32f-437fc1cfbf03\") " pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:05 crc kubenswrapper[4870]: E0130 08:10:05.253838 4870 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:10:05 crc kubenswrapper[4870]: E0130 08:10:05.253903 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs podName:7b976744-b72d-4291-a32f-437fc1cfbf03 nodeName:}" failed. No retries permitted until 2026-01-30 08:10:09.253888165 +0000 UTC m=+47.949435274 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs") pod "network-metrics-daemon-mp9vw" (UID: "7b976744-b72d-4291-a32f-437fc1cfbf03") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.254979 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.255058 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.255078 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.255104 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.255124 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:05Z","lastTransitionTime":"2026-01-30T08:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.357841 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.357904 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.357918 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.357935 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.357949 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:05Z","lastTransitionTime":"2026-01-30T08:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.460526 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.460573 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.460592 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.460640 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.460659 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:05Z","lastTransitionTime":"2026-01-30T08:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.563566 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.563602 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.563611 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.563624 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.563634 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:05Z","lastTransitionTime":"2026-01-30T08:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.667016 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.667311 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.667350 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.667381 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.667403 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:05Z","lastTransitionTime":"2026-01-30T08:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.770397 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.770497 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.770523 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.770557 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.770582 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:05Z","lastTransitionTime":"2026-01-30T08:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.873387 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.873422 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.873433 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.873449 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.873460 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:05Z","lastTransitionTime":"2026-01-30T08:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.977044 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.977135 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.977152 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.977175 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:05 crc kubenswrapper[4870]: I0130 08:10:05.977194 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:05Z","lastTransitionTime":"2026-01-30T08:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.006251 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.007661 4870 scope.go:117] "RemoveContainer" containerID="bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483" Jan 30 08:10:06 crc kubenswrapper[4870]: E0130 08:10:06.007980 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" podUID="36037609-52f9-4c09-8beb-6d35a039347b" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.054171 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 09:20:48.224421431 +0000 UTC Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.079244 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.079310 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.079328 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.079347 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.079360 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:06Z","lastTransitionTime":"2026-01-30T08:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.183588 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.183669 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.183688 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.183715 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.183734 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:06Z","lastTransitionTime":"2026-01-30T08:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.287175 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.287244 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.287262 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.287285 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.287302 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:06Z","lastTransitionTime":"2026-01-30T08:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.390588 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.390647 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.390663 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.390689 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.390706 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:06Z","lastTransitionTime":"2026-01-30T08:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.493948 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.494038 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.494068 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.494102 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.494127 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:06Z","lastTransitionTime":"2026-01-30T08:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.596974 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.597014 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.597025 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.597041 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.597054 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:06Z","lastTransitionTime":"2026-01-30T08:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.700191 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.700236 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.700248 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.700298 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.700313 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:06Z","lastTransitionTime":"2026-01-30T08:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.803916 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.803971 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.803982 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.804002 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.804016 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:06Z","lastTransitionTime":"2026-01-30T08:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.907413 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.907474 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.907486 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.907508 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:06 crc kubenswrapper[4870]: I0130 08:10:06.907519 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:06Z","lastTransitionTime":"2026-01-30T08:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.010519 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.010580 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.010597 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.010624 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.010643 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:07Z","lastTransitionTime":"2026-01-30T08:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.055032 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:20:10.039203289 +0000 UTC Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.073933 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.073960 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.073933 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.074034 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:07 crc kubenswrapper[4870]: E0130 08:10:07.074174 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:07 crc kubenswrapper[4870]: E0130 08:10:07.074338 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:07 crc kubenswrapper[4870]: E0130 08:10:07.074552 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:07 crc kubenswrapper[4870]: E0130 08:10:07.074612 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.113958 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.114001 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.114011 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.114024 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.114033 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:07Z","lastTransitionTime":"2026-01-30T08:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.217646 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.217724 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.217744 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.217770 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.217789 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:07Z","lastTransitionTime":"2026-01-30T08:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.321425 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.321479 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.321494 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.321515 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.321529 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:07Z","lastTransitionTime":"2026-01-30T08:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.424221 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.424308 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.424332 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.424353 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.424370 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:07Z","lastTransitionTime":"2026-01-30T08:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.527556 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.527620 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.527641 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.527668 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.527689 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:07Z","lastTransitionTime":"2026-01-30T08:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.631039 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.631112 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.631134 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.631162 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.631184 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:07Z","lastTransitionTime":"2026-01-30T08:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.727457 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.727531 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.727561 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.727589 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.727609 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:07Z","lastTransitionTime":"2026-01-30T08:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:07 crc kubenswrapper[4870]: E0130 08:10:07.747643 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"42bb4058-de5f-47d3-b90e-bda57dd064e9\\\",\\\"systemUUID\\\":\\\"7dbac932-0e54-4045-a1f0-fa334c8e1b7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:07Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.754164 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.754198 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.754211 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.754228 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.754239 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:07Z","lastTransitionTime":"2026-01-30T08:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:07 crc kubenswrapper[4870]: E0130 08:10:07.773119 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"42bb4058-de5f-47d3-b90e-bda57dd064e9\\\",\\\"systemUUID\\\":\\\"7dbac932-0e54-4045-a1f0-fa334c8e1b7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:07Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.779035 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.779077 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.779088 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.779128 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.779142 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:07Z","lastTransitionTime":"2026-01-30T08:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:07 crc kubenswrapper[4870]: E0130 08:10:07.795240 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"42bb4058-de5f-47d3-b90e-bda57dd064e9\\\",\\\"systemUUID\\\":\\\"7dbac932-0e54-4045-a1f0-fa334c8e1b7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:07Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.800262 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.800320 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.800344 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.800377 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.800399 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:07Z","lastTransitionTime":"2026-01-30T08:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:07 crc kubenswrapper[4870]: E0130 08:10:07.822656 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"42bb4058-de5f-47d3-b90e-bda57dd064e9\\\",\\\"systemUUID\\\":\\\"7dbac932-0e54-4045-a1f0-fa334c8e1b7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:07Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.827975 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.828020 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.828037 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.828058 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.828070 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:07Z","lastTransitionTime":"2026-01-30T08:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:07 crc kubenswrapper[4870]: E0130 08:10:07.848257 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"42bb4058-de5f-47d3-b90e-bda57dd064e9\\\",\\\"systemUUID\\\":\\\"7dbac932-0e54-4045-a1f0-fa334c8e1b7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:07Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:07 crc kubenswrapper[4870]: E0130 08:10:07.848399 4870 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.850803 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.850857 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.850920 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.850955 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.850980 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:07Z","lastTransitionTime":"2026-01-30T08:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.954400 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.954496 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.954514 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.954540 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:07 crc kubenswrapper[4870]: I0130 08:10:07.954561 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:07Z","lastTransitionTime":"2026-01-30T08:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.055669 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 07:36:59.685769235 +0000 UTC Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.058091 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.058158 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.058179 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.058204 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.058222 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:08Z","lastTransitionTime":"2026-01-30T08:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.161710 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.161766 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.161783 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.161806 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.161824 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:08Z","lastTransitionTime":"2026-01-30T08:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.264856 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.264979 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.264999 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.265026 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.265045 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:08Z","lastTransitionTime":"2026-01-30T08:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.368220 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.368283 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.368298 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.368312 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.368320 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:08Z","lastTransitionTime":"2026-01-30T08:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.471977 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.472431 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.472521 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.472615 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.472740 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:08Z","lastTransitionTime":"2026-01-30T08:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.575215 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.575262 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.575272 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.575290 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.575300 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:08Z","lastTransitionTime":"2026-01-30T08:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.678228 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.678291 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.678308 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.678332 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.678350 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:08Z","lastTransitionTime":"2026-01-30T08:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.781836 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.781907 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.781922 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.781940 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.781952 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:08Z","lastTransitionTime":"2026-01-30T08:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.884978 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.885015 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.885025 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.885040 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.885052 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:08Z","lastTransitionTime":"2026-01-30T08:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.987433 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.987465 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.987475 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.987487 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:08 crc kubenswrapper[4870]: I0130 08:10:08.987495 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:08Z","lastTransitionTime":"2026-01-30T08:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.056563 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 06:37:09.530940519 +0000 UTC Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.074511 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.074552 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.074660 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:09 crc kubenswrapper[4870]: E0130 08:10:09.074748 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.074779 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:09 crc kubenswrapper[4870]: E0130 08:10:09.074912 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:09 crc kubenswrapper[4870]: E0130 08:10:09.075239 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:09 crc kubenswrapper[4870]: E0130 08:10:09.075162 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.090380 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.090675 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.090820 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.091015 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.091162 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:09Z","lastTransitionTime":"2026-01-30T08:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.193684 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.193738 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.193751 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.193768 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.193780 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:09Z","lastTransitionTime":"2026-01-30T08:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.296690 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.296728 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.296743 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.296763 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.296778 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:09Z","lastTransitionTime":"2026-01-30T08:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.297608 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs\") pod \"network-metrics-daemon-mp9vw\" (UID: \"7b976744-b72d-4291-a32f-437fc1cfbf03\") " pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:09 crc kubenswrapper[4870]: E0130 08:10:09.297716 4870 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:10:09 crc kubenswrapper[4870]: E0130 08:10:09.297778 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs podName:7b976744-b72d-4291-a32f-437fc1cfbf03 nodeName:}" failed. No retries permitted until 2026-01-30 08:10:17.297760287 +0000 UTC m=+55.993307406 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs") pod "network-metrics-daemon-mp9vw" (UID: "7b976744-b72d-4291-a32f-437fc1cfbf03") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.399492 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.399535 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.399545 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.399562 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.399575 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:09Z","lastTransitionTime":"2026-01-30T08:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.502204 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.502326 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.502345 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.502373 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.502391 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:09Z","lastTransitionTime":"2026-01-30T08:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.605213 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.605283 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.605306 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.605335 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.605357 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:09Z","lastTransitionTime":"2026-01-30T08:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.708945 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.709012 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.709037 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.709069 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.709091 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:09Z","lastTransitionTime":"2026-01-30T08:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.812359 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.812434 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.812455 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.812498 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.812522 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:09Z","lastTransitionTime":"2026-01-30T08:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.915378 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.915453 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.915474 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.915504 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:09 crc kubenswrapper[4870]: I0130 08:10:09.915527 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:09Z","lastTransitionTime":"2026-01-30T08:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.018755 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.018978 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.019009 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.019042 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.019059 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:10Z","lastTransitionTime":"2026-01-30T08:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.057136 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 05:40:23.881853209 +0000 UTC Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.121778 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.121840 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.121859 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.121919 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.121963 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:10Z","lastTransitionTime":"2026-01-30T08:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.225230 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.225269 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.225280 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.225295 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.225307 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:10Z","lastTransitionTime":"2026-01-30T08:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.328982 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.329062 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.329104 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.329136 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.329156 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:10Z","lastTransitionTime":"2026-01-30T08:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.431670 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.431729 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.431751 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.431779 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.431799 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:10Z","lastTransitionTime":"2026-01-30T08:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.534318 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.534379 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.534396 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.534417 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.534435 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:10Z","lastTransitionTime":"2026-01-30T08:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.637398 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.637428 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.637438 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.637453 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.637462 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:10Z","lastTransitionTime":"2026-01-30T08:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.739912 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.739952 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.739961 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.739974 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.739984 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:10Z","lastTransitionTime":"2026-01-30T08:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.844068 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.844165 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.844183 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.844208 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.844227 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:10Z","lastTransitionTime":"2026-01-30T08:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.947459 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.947563 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.947581 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.947604 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:10 crc kubenswrapper[4870]: I0130 08:10:10.947621 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:10Z","lastTransitionTime":"2026-01-30T08:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.050717 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.050848 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.050939 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.050979 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.051002 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:11Z","lastTransitionTime":"2026-01-30T08:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.057928 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 09:08:59.431165673 +0000 UTC Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.073795 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.073978 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:11 crc kubenswrapper[4870]: E0130 08:10:11.074118 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.074183 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.074264 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:11 crc kubenswrapper[4870]: E0130 08:10:11.074428 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:11 crc kubenswrapper[4870]: E0130 08:10:11.074570 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:11 crc kubenswrapper[4870]: E0130 08:10:11.074735 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.155074 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.155132 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.155144 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.155163 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.155177 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:11Z","lastTransitionTime":"2026-01-30T08:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.258723 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.258776 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.258784 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.258799 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.258808 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:11Z","lastTransitionTime":"2026-01-30T08:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.362806 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.362868 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.362935 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.362960 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.362976 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:11Z","lastTransitionTime":"2026-01-30T08:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.466390 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.466428 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.466437 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.466451 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.466465 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:11Z","lastTransitionTime":"2026-01-30T08:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.569391 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.569464 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.569487 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.569515 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.569535 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:11Z","lastTransitionTime":"2026-01-30T08:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.673230 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.673287 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.673304 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.673328 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.673344 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:11Z","lastTransitionTime":"2026-01-30T08:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.775999 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.776050 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.776066 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.776087 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.776106 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:11Z","lastTransitionTime":"2026-01-30T08:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.879700 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.879781 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.879802 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.879826 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.879842 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:11Z","lastTransitionTime":"2026-01-30T08:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.983283 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.983546 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.983610 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.983700 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:11 crc kubenswrapper[4870]: I0130 08:10:11.983785 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:11Z","lastTransitionTime":"2026-01-30T08:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.058732 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 15:04:54.660144608 +0000 UTC Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.086852 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.087019 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.087046 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.087069 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.087131 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:12Z","lastTransitionTime":"2026-01-30T08:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.094615 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"in
itContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"
2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.122556 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfadb4b11c7243c5de202188c1e62ac35d26e29e
9c6a1c53063369253c1e2483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"message\\\":\\\"ancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0130 08:09:59.417960 6308 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 2.774229ms\\\\nI0130 08:09:59.418058 6308 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 08:09:59.418156 6308 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 08:09:59.418188 6308 factory.go:656] Stopping watch factory\\\\nI0130 08:09:59.418151 6308 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI0130 08:09:59.418201 6308 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 08:09:59.418201 6308 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 3.106099ms\\\\nI0130 08:09:59.418213 6308 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 08:09:59.418248 6308 ovnkube.go:599] Stopped ovnkube\\\\nI0130 08:09:59.418286 6308 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 08:09:59.418396 6308 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.139076 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe7ab8f2efaea80fb8f85a7d33f79d70eb848a65d1fc6dd1cacacc479c1065c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a87e50cd50d8225874491d677cc6961215e485310948984353dda93ab97eced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5xdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.156534 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mp9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b976744-b72d-4291-a32f-437fc1cfbf03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:10:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mp9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.180821 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.190316 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.190384 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.190402 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.190429 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.190447 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:12Z","lastTransitionTime":"2026-01-30T08:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.198441 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.220482 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.243194 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.260207 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.278312 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.293645 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.293692 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.293702 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.293719 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.293728 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:12Z","lastTransitionTime":"2026-01-30T08:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.295354 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.310336 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.342790 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.364281 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.380195 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.393029 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:10:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.396102 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.396244 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.396261 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.396279 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.396290 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:12Z","lastTransitionTime":"2026-01-30T08:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.409143 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":
\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.499200 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.499235 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.499246 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.499261 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.499274 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:12Z","lastTransitionTime":"2026-01-30T08:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.601629 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.601700 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.601724 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.601755 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.601777 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:12Z","lastTransitionTime":"2026-01-30T08:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.704337 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.704403 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.704419 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.704442 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.704463 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:12Z","lastTransitionTime":"2026-01-30T08:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.807573 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.807649 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.807675 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.807705 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.807728 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:12Z","lastTransitionTime":"2026-01-30T08:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.841227 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:10:12 crc kubenswrapper[4870]: E0130 08:10:12.841389 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:10:44.841359106 +0000 UTC m=+83.536906255 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.910631 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.910673 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.910684 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.910699 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.910710 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:12Z","lastTransitionTime":"2026-01-30T08:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.942527 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.942620 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:12 crc kubenswrapper[4870]: E0130 08:10:12.942717 4870 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:10:12 crc kubenswrapper[4870]: E0130 08:10:12.942810 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:10:12 crc kubenswrapper[4870]: E0130 08:10:12.942853 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:10:12 crc kubenswrapper[4870]: E0130 08:10:12.942913 4870 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:10:12 crc kubenswrapper[4870]: E0130 08:10:12.942832 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:10:44.942803615 +0000 UTC m=+83.638350764 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.942737 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:12 crc kubenswrapper[4870]: E0130 08:10:12.943020 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:10:44.942990701 +0000 UTC m=+83.638537850 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:10:12 crc kubenswrapper[4870]: I0130 08:10:12.943058 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:12 crc kubenswrapper[4870]: E0130 08:10:12.943085 4870 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:10:12 crc kubenswrapper[4870]: E0130 08:10:12.943179 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:10:12 crc kubenswrapper[4870]: E0130 08:10:12.943197 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:10:12 crc kubenswrapper[4870]: E0130 08:10:12.943210 4870 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:10:12 crc kubenswrapper[4870]: E0130 08:10:12.943273 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:10:44.943254159 +0000 UTC m=+83.638801268 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:10:12 crc kubenswrapper[4870]: E0130 08:10:12.943312 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:10:44.94328393 +0000 UTC m=+83.638831099 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.013654 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.013713 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.013731 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.013763 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.013781 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:13Z","lastTransitionTime":"2026-01-30T08:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.060351 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 05:30:08.99239544 +0000 UTC Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.074287 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.074344 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.074395 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.074325 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:13 crc kubenswrapper[4870]: E0130 08:10:13.074478 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:13 crc kubenswrapper[4870]: E0130 08:10:13.074608 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:13 crc kubenswrapper[4870]: E0130 08:10:13.074730 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:13 crc kubenswrapper[4870]: E0130 08:10:13.074944 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.116687 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.116760 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.116778 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.116805 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.116823 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:13Z","lastTransitionTime":"2026-01-30T08:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.220414 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.220481 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.220499 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.220522 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.220542 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:13Z","lastTransitionTime":"2026-01-30T08:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.323725 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.323781 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.323801 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.323830 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.323850 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:13Z","lastTransitionTime":"2026-01-30T08:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.426942 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.427019 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.427046 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.427119 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.427144 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:13Z","lastTransitionTime":"2026-01-30T08:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.529663 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.529724 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.529746 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.529773 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.529795 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:13Z","lastTransitionTime":"2026-01-30T08:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.633413 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.633471 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.633489 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.633514 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.633533 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:13Z","lastTransitionTime":"2026-01-30T08:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.724864 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.739403 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.739660 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.739710 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.739724 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.739746 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.739759 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:13Z","lastTransitionTime":"2026-01-30T08:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.747815 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:13Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.771776 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:13Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.792445 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:13Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.810196 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:13Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.825591 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:13Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.843412 4870 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:13Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.845223 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.845264 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.845281 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.845304 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.845321 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:13Z","lastTransitionTime":"2026-01-30T08:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.870376 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:13Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.888581 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:13Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.902933 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:13Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.913290 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:10:13Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.926517 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:13Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.940799 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:13Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.948855 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.948926 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.948939 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.948960 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.948973 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:13Z","lastTransitionTime":"2026-01-30T08:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.952158 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:13Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.982576 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"message\\\":\\\"ancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0130 08:09:59.417960 6308 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 2.774229ms\\\\nI0130 08:09:59.418058 6308 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 08:09:59.418156 6308 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 08:09:59.418188 6308 factory.go:656] Stopping watch factory\\\\nI0130 08:09:59.418151 6308 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI0130 08:09:59.418201 6308 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 08:09:59.418201 6308 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 3.106099ms\\\\nI0130 08:09:59.418213 6308 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 08:09:59.418248 6308 ovnkube.go:599] Stopped ovnkube\\\\nI0130 08:09:59.418286 6308 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 08:09:59.418396 6308 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:13Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:13 crc kubenswrapper[4870]: I0130 08:10:13.998031 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe7ab8f2efaea80fb8f85a7d33f79d70eb848a65d1fc6dd1cacacc479c1065c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a87e50cd50d8225874491d677cc6961215e485310948984353dda93ab97eced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5xdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:13Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.014337 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mp9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b976744-b72d-4291-a32f-437fc1cfbf03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:10:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mp9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:14Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.037337 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:14Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.051401 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.051460 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:14 crc 
kubenswrapper[4870]: I0130 08:10:14.051483 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.051514 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.051538 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:14Z","lastTransitionTime":"2026-01-30T08:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.060928 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 12:06:10.605915443 +0000 UTC Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.154819 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.154868 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.154899 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.154920 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.154931 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:14Z","lastTransitionTime":"2026-01-30T08:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.258686 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.258744 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.258760 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.258784 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.258803 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:14Z","lastTransitionTime":"2026-01-30T08:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.361660 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.361709 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.361728 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.361755 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.361773 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:14Z","lastTransitionTime":"2026-01-30T08:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.464109 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.464165 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.464182 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.464205 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.464222 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:14Z","lastTransitionTime":"2026-01-30T08:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.566697 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.566767 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.566789 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.566819 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.566841 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:14Z","lastTransitionTime":"2026-01-30T08:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.669434 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.669477 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.669487 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.669503 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.669516 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:14Z","lastTransitionTime":"2026-01-30T08:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.771951 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.772011 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.772029 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.772056 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.772074 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:14Z","lastTransitionTime":"2026-01-30T08:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.874953 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.874999 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.875014 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.875034 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.875050 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:14Z","lastTransitionTime":"2026-01-30T08:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.977405 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.977438 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.977448 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.977461 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:14 crc kubenswrapper[4870]: I0130 08:10:14.977473 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:14Z","lastTransitionTime":"2026-01-30T08:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.061406 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 15:02:09.733681203 +0000 UTC Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.074125 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.074191 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:15 crc kubenswrapper[4870]: E0130 08:10:15.074238 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.074131 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.074134 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:15 crc kubenswrapper[4870]: E0130 08:10:15.074312 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:15 crc kubenswrapper[4870]: E0130 08:10:15.074503 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:15 crc kubenswrapper[4870]: E0130 08:10:15.074564 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.081079 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.081130 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.081147 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.081166 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.081182 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:15Z","lastTransitionTime":"2026-01-30T08:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.184258 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.184319 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.184338 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.184363 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.184383 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:15Z","lastTransitionTime":"2026-01-30T08:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.286781 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.286821 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.286831 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.286848 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.286862 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:15Z","lastTransitionTime":"2026-01-30T08:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.389948 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.390023 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.390045 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.390077 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.390095 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:15Z","lastTransitionTime":"2026-01-30T08:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.493139 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.493183 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.493201 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.493222 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.493239 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:15Z","lastTransitionTime":"2026-01-30T08:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.599184 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.599461 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.599549 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.599656 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.599756 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:15Z","lastTransitionTime":"2026-01-30T08:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.703027 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.703096 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.703119 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.703147 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.703165 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:15Z","lastTransitionTime":"2026-01-30T08:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.805582 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.805817 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.805964 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.806034 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.806097 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:15Z","lastTransitionTime":"2026-01-30T08:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.908610 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.908687 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.908705 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.908724 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:15 crc kubenswrapper[4870]: I0130 08:10:15.908738 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:15Z","lastTransitionTime":"2026-01-30T08:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.012029 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.012093 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.012111 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.012137 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.012156 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:16Z","lastTransitionTime":"2026-01-30T08:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.061990 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 04:06:39.153869656 +0000 UTC Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.116031 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.116097 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.116115 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.116139 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.116196 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:16Z","lastTransitionTime":"2026-01-30T08:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.219597 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.219657 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.219671 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.219698 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.219713 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:16Z","lastTransitionTime":"2026-01-30T08:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.323345 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.323747 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.323935 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.324120 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.324278 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:16Z","lastTransitionTime":"2026-01-30T08:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.426497 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.426542 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.426554 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.426574 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.426586 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:16Z","lastTransitionTime":"2026-01-30T08:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.531395 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.531472 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.531510 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.531543 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.531566 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:16Z","lastTransitionTime":"2026-01-30T08:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.634459 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.634603 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.634633 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.634656 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.634674 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:16Z","lastTransitionTime":"2026-01-30T08:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.738187 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.738276 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.738309 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.738338 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.738360 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:16Z","lastTransitionTime":"2026-01-30T08:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.841632 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.841700 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.841713 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.841730 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.841742 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:16Z","lastTransitionTime":"2026-01-30T08:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.946417 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.946452 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.946462 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.946480 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:16 crc kubenswrapper[4870]: I0130 08:10:16.946491 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:16Z","lastTransitionTime":"2026-01-30T08:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.049367 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.049424 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.049443 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.049471 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.049488 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:17Z","lastTransitionTime":"2026-01-30T08:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.062779 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 08:11:55.083827739 +0000 UTC Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.074332 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.074361 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.074430 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.074430 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:17 crc kubenswrapper[4870]: E0130 08:10:17.074478 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:17 crc kubenswrapper[4870]: E0130 08:10:17.074648 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:17 crc kubenswrapper[4870]: E0130 08:10:17.074678 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:17 crc kubenswrapper[4870]: E0130 08:10:17.074860 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.152689 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.152767 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.152785 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.152815 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.152834 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:17Z","lastTransitionTime":"2026-01-30T08:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.255803 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.255856 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.255872 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.255919 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.255934 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:17Z","lastTransitionTime":"2026-01-30T08:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.358490 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.358725 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.358740 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.358761 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.358775 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:17Z","lastTransitionTime":"2026-01-30T08:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.391526 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs\") pod \"network-metrics-daemon-mp9vw\" (UID: \"7b976744-b72d-4291-a32f-437fc1cfbf03\") " pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:17 crc kubenswrapper[4870]: E0130 08:10:17.391795 4870 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:10:17 crc kubenswrapper[4870]: E0130 08:10:17.391939 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs podName:7b976744-b72d-4291-a32f-437fc1cfbf03 nodeName:}" failed. No retries permitted until 2026-01-30 08:10:33.391867834 +0000 UTC m=+72.087414983 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs") pod "network-metrics-daemon-mp9vw" (UID: "7b976744-b72d-4291-a32f-437fc1cfbf03") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.462208 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.462518 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.462811 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.463254 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.463492 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:17Z","lastTransitionTime":"2026-01-30T08:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.566075 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.566195 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.566223 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.566256 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.566278 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:17Z","lastTransitionTime":"2026-01-30T08:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.669604 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.669649 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.669661 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.669680 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.669691 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:17Z","lastTransitionTime":"2026-01-30T08:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.772097 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.772125 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.772133 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.772145 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.772153 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:17Z","lastTransitionTime":"2026-01-30T08:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.874491 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.874548 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.874561 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.874581 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.874595 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:17Z","lastTransitionTime":"2026-01-30T08:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.961915 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.961951 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.961962 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.961978 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.961988 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:17Z","lastTransitionTime":"2026-01-30T08:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:17 crc kubenswrapper[4870]: E0130 08:10:17.982132 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"42bb4058-de5f-47d3-b90e-bda57dd064e9\\\",\\\"systemUUID\\\":\\\"7dbac932-0e54-4045-a1f0-fa334c8e1b7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.992190 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.992273 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.992294 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.992321 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:17 crc kubenswrapper[4870]: I0130 08:10:17.992348 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:17Z","lastTransitionTime":"2026-01-30T08:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:18 crc kubenswrapper[4870]: E0130 08:10:18.013386 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"42bb4058-de5f-47d3-b90e-bda57dd064e9\\\",\\\"systemUUID\\\":\\\"7dbac932-0e54-4045-a1f0-fa334c8e1b7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.017565 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.017632 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.017653 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.017682 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.017708 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:18Z","lastTransitionTime":"2026-01-30T08:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:18 crc kubenswrapper[4870]: E0130 08:10:18.038245 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"42bb4058-de5f-47d3-b90e-bda57dd064e9\\\",\\\"systemUUID\\\":\\\"7dbac932-0e54-4045-a1f0-fa334c8e1b7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.043098 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.043151 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.043171 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.043195 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.043212 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:18Z","lastTransitionTime":"2026-01-30T08:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:18 crc kubenswrapper[4870]: E0130 08:10:18.059202 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"42bb4058-de5f-47d3-b90e-bda57dd064e9\\\",\\\"systemUUID\\\":\\\"7dbac932-0e54-4045-a1f0-fa334c8e1b7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.063437 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 08:46:51.111240993 +0000 UTC Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.064660 4870 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.064726 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.064743 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.064768 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.064786 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:18Z","lastTransitionTime":"2026-01-30T08:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.075331 4870 scope.go:117] "RemoveContainer" containerID="bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483" Jan 30 08:10:18 crc kubenswrapper[4870]: E0130 08:10:18.087155 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"42bb4058-de5f-47d3-b90e-bda57dd064e9\\\",\\\"systemUUID\\\":\\\"7dbac932-0e54-4045-a1f0-fa334c8e1b7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:18 crc kubenswrapper[4870]: E0130 08:10:18.087759 4870 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.090256 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.090297 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.090314 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.090337 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.090355 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:18Z","lastTransitionTime":"2026-01-30T08:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.193571 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.193631 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.193653 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.193682 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.193704 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:18Z","lastTransitionTime":"2026-01-30T08:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.297107 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.297608 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.297627 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.297652 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.297668 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:18Z","lastTransitionTime":"2026-01-30T08:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.401544 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.401579 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.401591 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.401611 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.401625 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:18Z","lastTransitionTime":"2026-01-30T08:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.476561 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj5db_36037609-52f9-4c09-8beb-6d35a039347b/ovnkube-controller/1.log" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.481680 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerStarted","Data":"1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d"} Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.484016 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.499902 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8759add-f10c-406f-a1ee-1a4530b369b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb553cf7aed8c067e432248cd08fdd7db46f939c608b18c4e25737b8d115d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11111845069f3d8452723ee5954d1f68e314872eda1d204abf733f4871a7ec40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb3033803f0aa71bcbc5737ac85911714c8cd9c224b2cb7813a3876e454e081f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.504935 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.504979 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.504998 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.505020 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.505036 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:18Z","lastTransitionTime":"2026-01-30T08:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.517869 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.536330 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.554285 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.574811 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.598038 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.607730 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.607778 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.607790 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.607805 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.607816 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:18Z","lastTransitionTime":"2026-01-30T08:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.618348 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.645226 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1
003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.658680 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.672147 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.682658 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.709899 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.709980 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.709993 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.710011 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.710032 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:18Z","lastTransitionTime":"2026-01-30T08:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.711791 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.733758 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.753219 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.785224 4870 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.812508 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"message\\\":\\\"ancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0130 08:09:59.417960 6308 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 2.774229ms\\\\nI0130 08:09:59.418058 6308 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 08:09:59.418156 6308 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 08:09:59.418188 6308 factory.go:656] Stopping watch factory\\\\nI0130 08:09:59.418151 6308 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI0130 08:09:59.418201 6308 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 08:09:59.418201 6308 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 3.106099ms\\\\nI0130 08:09:59.418213 6308 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 08:09:59.418248 6308 ovnkube.go:599] Stopped ovnkube\\\\nI0130 08:09:59.418286 6308 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 08:09:59.418396 6308 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.826929 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe7ab8f2efaea80fb8f85a7d33f79d70eb848a65d1fc6dd1cacacc479c1065c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a87e50cd50d8225874491d677cc6961215e485310948984353dda93ab97eced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5xdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:18Z is after 2025-08-24T17:21:41Z" Jan 30 
08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.836146 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mp9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b976744-b72d-4291-a32f-437fc1cfbf03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:10:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mp9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.839835 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.839906 4870 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.839920 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.839940 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.839951 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:18Z","lastTransitionTime":"2026-01-30T08:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.943162 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.943206 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.943218 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.943234 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:18 crc kubenswrapper[4870]: I0130 08:10:18.943245 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:18Z","lastTransitionTime":"2026-01-30T08:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.046052 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.046127 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.046148 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.046176 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.046199 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:19Z","lastTransitionTime":"2026-01-30T08:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.064386 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 19:19:48.76073837 +0000 UTC Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.074001 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.074051 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.074071 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.074015 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:19 crc kubenswrapper[4870]: E0130 08:10:19.074183 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:19 crc kubenswrapper[4870]: E0130 08:10:19.074246 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:19 crc kubenswrapper[4870]: E0130 08:10:19.074320 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:19 crc kubenswrapper[4870]: E0130 08:10:19.074409 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.149231 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.149269 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.149279 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.149293 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.149302 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:19Z","lastTransitionTime":"2026-01-30T08:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.251819 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.251870 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.251914 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.251938 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.251956 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:19Z","lastTransitionTime":"2026-01-30T08:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.354869 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.355000 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.355020 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.355047 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.355065 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:19Z","lastTransitionTime":"2026-01-30T08:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.457781 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.457834 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.457852 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.457919 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.457940 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:19Z","lastTransitionTime":"2026-01-30T08:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.489190 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj5db_36037609-52f9-4c09-8beb-6d35a039347b/ovnkube-controller/2.log" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.490386 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj5db_36037609-52f9-4c09-8beb-6d35a039347b/ovnkube-controller/1.log" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.495359 4870 generic.go:334] "Generic (PLEG): container finished" podID="36037609-52f9-4c09-8beb-6d35a039347b" containerID="1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d" exitCode=1 Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.495429 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerDied","Data":"1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d"} Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.495530 4870 scope.go:117] "RemoveContainer" containerID="bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.497035 4870 scope.go:117] "RemoveContainer" containerID="1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d" Jan 30 08:10:19 crc kubenswrapper[4870]: E0130 08:10:19.497393 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" podUID="36037609-52f9-4c09-8beb-6d35a039347b" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.518275 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.535348 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.552327 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.561057 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.561106 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.561124 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.561148 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.561167 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:19Z","lastTransitionTime":"2026-01-30T08:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.566814 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.580919 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.592505 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.602763 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.625856 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1
003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.639893 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.650522 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mp9vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b976744-b72d-4291-a32f-437fc1cfbf03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:10:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mp9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.663231 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.663259 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.663269 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.663281 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.663291 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:19Z","lastTransitionTime":"2026-01-30T08:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.671123 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.698770 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30
T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\
",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfadb4b11c7243c5de202188c1e62ac35d26e29e9c6a1c53063369253c1e2483\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"message\\\":\\\"ancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0130 08:09:59.417960 6308 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 2.774229ms\\\\nI0130 08:09:59.418058 6308 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 08:09:59.418156 6308 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 08:09:59.418188 6308 factory.go:656] Stopping watch factory\\\\nI0130 08:09:59.418151 6308 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}\\\\nI0130 08:09:59.418201 6308 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 08:09:59.418201 6308 services_controller.go:360] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api for network=default : 3.106099ms\\\\nI0130 08:09:59.418213 6308 handler.go:208] 
Removed *v1.EgressFirewall event handler 9\\\\nI0130 08:09:59.418248 6308 ovnkube.go:599] Stopped ovnkube\\\\nI0130 08:09:59.418286 6308 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 08:09:59.418396 6308 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:10:19Z\\\",\\\"message\\\":\\\"ne-config-daemon-j4sd8\\\\nI0130 08:10:19.196562 6529 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-j4sd8 in node crc\\\\nF0130 08:10:19.196560 6529 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:10:19.196573 6529 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-j4sd8 after 0 failed attempt(s)\\\\nI0130 08:10:19.196569 6529 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-8kvt7\\\\nI0130 08:10:19.196583 6529 obj_retry.go:303] Retry object setup: 
*v1.Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.710346 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe7ab8f2efaea80fb8f85a7d33f79d70eb848a65d1fc6dd1cacacc479c1065c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a87e50cd50d8225874491d677cc6961215e485310948984353dda93ab97eced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5xdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.728750 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.746000 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.760042 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8759add-f10c-406f-a1ee-1a4530b369b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb553cf7aed8c067e432248cd08fdd7db46f939c608b18c4e25737b8d115d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11111845069f3d8452723ee5954d1f68e314872eda1d204abf733f4871a7ec40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb3033803f0aa71bcbc5737ac85911714c8cd9c224b2cb7813a3876e454e081f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.765271 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.765313 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.765323 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.765340 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.765355 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:19Z","lastTransitionTime":"2026-01-30T08:10:19Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.780630 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.799188 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.867793 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.867868 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.867901 4870 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.867919 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.867931 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:19Z","lastTransitionTime":"2026-01-30T08:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.971292 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.971331 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.971340 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.971354 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:19 crc kubenswrapper[4870]: I0130 08:10:19.971363 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:19Z","lastTransitionTime":"2026-01-30T08:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.064547 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 11:37:40.559730887 +0000 UTC Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.073574 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.073612 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.073621 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.073635 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.073643 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:20Z","lastTransitionTime":"2026-01-30T08:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.176671 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.176725 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.176744 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.176768 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.176786 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:20Z","lastTransitionTime":"2026-01-30T08:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.280410 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.280473 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.280490 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.280513 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.280531 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:20Z","lastTransitionTime":"2026-01-30T08:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.383512 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.383556 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.383567 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.383583 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.383596 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:20Z","lastTransitionTime":"2026-01-30T08:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.486673 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.486718 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.486729 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.486744 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.486756 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:20Z","lastTransitionTime":"2026-01-30T08:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.501193 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj5db_36037609-52f9-4c09-8beb-6d35a039347b/ovnkube-controller/2.log" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.505190 4870 scope.go:117] "RemoveContainer" containerID="1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d" Jan 30 08:10:20 crc kubenswrapper[4870]: E0130 08:10:20.505382 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" podUID="36037609-52f9-4c09-8beb-6d35a039347b" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.519902 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.537627 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.553316 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.570130 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.588744 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.590868 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.590955 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.590972 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.590997 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.591016 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:20Z","lastTransitionTime":"2026-01-30T08:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.605017 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.638515 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",
\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.658703 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.675042 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.693783 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.693841 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.693857 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.693906 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.693925 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:20Z","lastTransitionTime":"2026-01-30T08:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.701658 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.729658 4870 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:10:19Z\\\",\\\"message\\\":\\\"ne-config-daemon-j4sd8\\\\nI0130 08:10:19.196562 6529 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-j4sd8 in node crc\\\\nF0130 08:10:19.196560 6529 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:10:19.196573 6529 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-j4sd8 after 0 failed attempt(s)\\\\nI0130 08:10:19.196569 6529 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-8kvt7\\\\nI0130 08:10:19.196583 6529 obj_retry.go:303] Retry object setup: *v1.Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:10:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.745212 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe7ab8f2efaea80fb8f85a7d33f79d70eb848a65d1fc6dd1cacacc479c1065c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a87e50cd50d8225874491d677cc6961215e485310948984353dda93ab97eced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5xdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.762868 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mp9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b976744-b72d-4291-a32f-437fc1cfbf03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:10:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mp9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.783190 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.796727 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.796777 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.796792 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.796811 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.796824 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:20Z","lastTransitionTime":"2026-01-30T08:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.798477 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8759add-f10c-406f-a1ee-1a4530b369b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb553cf7aed8c067e432248cd08fdd7db46f939c608b18c4e25737b8d115d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11111845069f3d8452723ee5954d1f68e314872eda1d204abf733f4871a7ec40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb3033803f0aa71bcbc5737ac85911714c8cd9c224b2cb7813a3876e454e081f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.816699 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f884
89b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.832105 4870 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.846390 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.900010 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.900063 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.900075 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.900092 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:20 crc kubenswrapper[4870]: I0130 08:10:20.900106 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:20Z","lastTransitionTime":"2026-01-30T08:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.003268 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.003303 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.003312 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.003328 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.003340 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:21Z","lastTransitionTime":"2026-01-30T08:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.064741 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 07:02:34.005110745 +0000 UTC Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.074394 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.074510 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:21 crc kubenswrapper[4870]: E0130 08:10:21.074683 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.074712 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.074730 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:21 crc kubenswrapper[4870]: E0130 08:10:21.074828 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:21 crc kubenswrapper[4870]: E0130 08:10:21.075017 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:21 crc kubenswrapper[4870]: E0130 08:10:21.075111 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.106630 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.106696 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.106720 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.106749 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.106772 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:21Z","lastTransitionTime":"2026-01-30T08:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.209600 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.209671 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.209695 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.209728 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.209754 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:21Z","lastTransitionTime":"2026-01-30T08:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.313166 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.313232 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.313256 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.313287 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.313311 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:21Z","lastTransitionTime":"2026-01-30T08:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.416280 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.416328 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.416338 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.416353 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.416364 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:21Z","lastTransitionTime":"2026-01-30T08:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.519029 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.519139 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.519160 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.519182 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.519241 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:21Z","lastTransitionTime":"2026-01-30T08:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.622203 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.622265 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.622283 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.622312 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.622334 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:21Z","lastTransitionTime":"2026-01-30T08:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.725225 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.725268 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.725277 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.725292 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.725301 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:21Z","lastTransitionTime":"2026-01-30T08:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.827963 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.828004 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.828015 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.828029 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.828041 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:21Z","lastTransitionTime":"2026-01-30T08:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.931191 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.931268 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.931290 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.931320 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:21 crc kubenswrapper[4870]: I0130 08:10:21.931342 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:21Z","lastTransitionTime":"2026-01-30T08:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.035207 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.035267 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.035282 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.035304 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.035323 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:22Z","lastTransitionTime":"2026-01-30T08:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.065444 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 05:14:16.569602601 +0000 UTC Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.098262 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secre
ts/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exit
Code\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-30T08:10:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.130205 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994
82919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:10:19Z\\\",\\\"message\\\":\\\"ne-config-daemon-j4sd8\\\\nI0130 08:10:19.196562 6529 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-j4sd8 in node crc\\\\nF0130 08:10:19.196560 6529 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:10:19.196573 6529 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-j4sd8 after 0 failed attempt(s)\\\\nI0130 08:10:19.196569 6529 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-8kvt7\\\\nI0130 08:10:19.196583 6529 obj_retry.go:303] Retry object setup: *v1.Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:10:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s 
restarting failed container=ovnkube-controller pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.138076 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.138137 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.138156 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.138181 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.138198 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:22Z","lastTransitionTime":"2026-01-30T08:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.145777 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe7ab8f2efaea80fb8f85a7d33f79d70eb848a65d1fc6dd1cacacc479c1065c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a87e50cd50d8225874491d677cc6961215e485310948984353dda93ab97eced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5xdq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.163147 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mp9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b976744-b72d-4291-a32f-437fc1cfbf03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:10:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mp9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:22 crc 
kubenswrapper[4870]: I0130 08:10:22.177110 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8759add-f10c-406f-a1ee-1a4530b369b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb553cf7aed8c067e432248cd08fdd7db46f939c608b18c4e25737b8d115d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11111845069f3d8452723ee5954d1f68e314872eda1d204abf733f4871a7ec40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb3033803f0aa71bcbc5737ac85911714c8cd9c224b2cb7813a3876e454e081f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\
\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.195184 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 
08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.210669 4870 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.231019 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.245174 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.245235 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.245255 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.245293 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.245314 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:22Z","lastTransitionTime":"2026-01-30T08:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.251077 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.270831 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.289973 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.310533 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.324507 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.346436 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1
003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.348022 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.348083 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.348101 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.348149 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.348168 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:22Z","lastTransitionTime":"2026-01-30T08:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.361848 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.384701 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.399098 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:10:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.411999 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.452374 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.452448 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.452467 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.452492 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.452510 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:22Z","lastTransitionTime":"2026-01-30T08:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.555461 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.555554 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.555573 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.555955 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.555976 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:22Z","lastTransitionTime":"2026-01-30T08:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.659185 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.659247 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.659316 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.659347 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.659368 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:22Z","lastTransitionTime":"2026-01-30T08:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.762538 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.762600 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.762617 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.762639 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.762658 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:22Z","lastTransitionTime":"2026-01-30T08:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.866392 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.866455 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.866478 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.866510 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.866532 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:22Z","lastTransitionTime":"2026-01-30T08:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.969923 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.969984 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.970007 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.970035 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:22 crc kubenswrapper[4870]: I0130 08:10:22.970057 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:22Z","lastTransitionTime":"2026-01-30T08:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.066273 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 01:16:42.085473252 +0000 UTC Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.072705 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.072764 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.072780 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.072805 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.072823 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:23Z","lastTransitionTime":"2026-01-30T08:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.073625 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.073693 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.073709 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.073631 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:23 crc kubenswrapper[4870]: E0130 08:10:23.073787 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:23 crc kubenswrapper[4870]: E0130 08:10:23.073976 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:23 crc kubenswrapper[4870]: E0130 08:10:23.074117 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:23 crc kubenswrapper[4870]: E0130 08:10:23.074214 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.175335 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.175404 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.175421 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.175491 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.175518 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:23Z","lastTransitionTime":"2026-01-30T08:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.279751 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.279829 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.279863 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.279949 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.279974 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:23Z","lastTransitionTime":"2026-01-30T08:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.383512 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.383574 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.383590 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.383617 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.383633 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:23Z","lastTransitionTime":"2026-01-30T08:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.487209 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.487282 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.487306 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.487339 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.487361 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:23Z","lastTransitionTime":"2026-01-30T08:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.590696 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.590737 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.590749 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.590766 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.590778 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:23Z","lastTransitionTime":"2026-01-30T08:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.693526 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.693570 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.693580 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.693596 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.693606 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:23Z","lastTransitionTime":"2026-01-30T08:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.797523 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.797556 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.797564 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.797579 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.797589 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:23Z","lastTransitionTime":"2026-01-30T08:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.900657 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.900720 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.900732 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.900751 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:23 crc kubenswrapper[4870]: I0130 08:10:23.900763 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:23Z","lastTransitionTime":"2026-01-30T08:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.003632 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.003675 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.003686 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.003704 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.003715 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:24Z","lastTransitionTime":"2026-01-30T08:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.067186 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 21:06:27.472271315 +0000 UTC Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.106176 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.106236 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.106246 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.106261 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.106272 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:24Z","lastTransitionTime":"2026-01-30T08:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.209401 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.209456 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.209467 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.209486 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.209497 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:24Z","lastTransitionTime":"2026-01-30T08:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.311655 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.311706 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.311716 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.311732 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.311741 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:24Z","lastTransitionTime":"2026-01-30T08:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.414940 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.414990 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.415005 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.415026 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.415039 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:24Z","lastTransitionTime":"2026-01-30T08:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.517787 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.517851 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.517871 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.517924 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.517942 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:24Z","lastTransitionTime":"2026-01-30T08:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.621109 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.621191 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.621210 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.621236 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.621255 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:24Z","lastTransitionTime":"2026-01-30T08:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.724812 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.724942 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.724963 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.724990 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.725007 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:24Z","lastTransitionTime":"2026-01-30T08:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.828525 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.828579 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.828597 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.828618 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.828634 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:24Z","lastTransitionTime":"2026-01-30T08:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.932445 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.932520 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.932534 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.932555 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:24 crc kubenswrapper[4870]: I0130 08:10:24.932569 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:24Z","lastTransitionTime":"2026-01-30T08:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.035302 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.035336 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.035347 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.035362 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.035372 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:25Z","lastTransitionTime":"2026-01-30T08:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.068058 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 15:12:08.066775524 +0000 UTC Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.073606 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:25 crc kubenswrapper[4870]: E0130 08:10:25.073764 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.074067 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:25 crc kubenswrapper[4870]: E0130 08:10:25.074178 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.074389 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:25 crc kubenswrapper[4870]: E0130 08:10:25.074511 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.075025 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:25 crc kubenswrapper[4870]: E0130 08:10:25.075132 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.139115 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.139174 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.139215 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.139249 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.139271 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:25Z","lastTransitionTime":"2026-01-30T08:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.242826 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.242904 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.242922 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.242948 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.242970 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:25Z","lastTransitionTime":"2026-01-30T08:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.346427 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.346489 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.346504 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.346535 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.346553 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:25Z","lastTransitionTime":"2026-01-30T08:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.449911 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.449973 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.449994 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.450019 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.450037 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:25Z","lastTransitionTime":"2026-01-30T08:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.554150 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.554206 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.554217 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.554240 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.554258 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:25Z","lastTransitionTime":"2026-01-30T08:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.657019 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.657073 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.657084 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.657103 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.657119 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:25Z","lastTransitionTime":"2026-01-30T08:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.760518 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.760581 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.760592 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.760615 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.760630 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:25Z","lastTransitionTime":"2026-01-30T08:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.864062 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.864187 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.864206 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.864230 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.864250 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:25Z","lastTransitionTime":"2026-01-30T08:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.966829 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.966962 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.966999 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.967030 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:25 crc kubenswrapper[4870]: I0130 08:10:25.967047 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:25Z","lastTransitionTime":"2026-01-30T08:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.068195 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 06:12:24.777881352 +0000 UTC Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.071126 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.071174 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.071189 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.071211 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.071227 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:26Z","lastTransitionTime":"2026-01-30T08:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.174245 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.174290 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.174303 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.174324 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.174336 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:26Z","lastTransitionTime":"2026-01-30T08:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.277431 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.277478 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.277490 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.277507 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.277518 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:26Z","lastTransitionTime":"2026-01-30T08:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.380609 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.380666 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.380678 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.380698 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.380711 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:26Z","lastTransitionTime":"2026-01-30T08:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.483531 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.483577 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.483591 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.483608 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.483622 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:26Z","lastTransitionTime":"2026-01-30T08:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.586916 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.586971 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.586982 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.587002 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.587015 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:26Z","lastTransitionTime":"2026-01-30T08:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.689641 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.689691 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.689703 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.689721 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.689734 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:26Z","lastTransitionTime":"2026-01-30T08:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.792371 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.792550 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.792645 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.792679 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.792706 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:26Z","lastTransitionTime":"2026-01-30T08:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.895021 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.895059 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.895070 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.895086 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.895098 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:26Z","lastTransitionTime":"2026-01-30T08:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.997433 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.997483 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.997500 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.997520 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:26 crc kubenswrapper[4870]: I0130 08:10:26.997535 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:26Z","lastTransitionTime":"2026-01-30T08:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.069040 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 23:36:16.227977342 +0000 UTC Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.074421 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.074460 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.074495 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.074601 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:27 crc kubenswrapper[4870]: E0130 08:10:27.074753 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:27 crc kubenswrapper[4870]: E0130 08:10:27.074903 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:27 crc kubenswrapper[4870]: E0130 08:10:27.075014 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:27 crc kubenswrapper[4870]: E0130 08:10:27.075141 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.100448 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.100513 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.100525 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.100545 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.100560 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:27Z","lastTransitionTime":"2026-01-30T08:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.202571 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.202606 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.202618 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.202632 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.202645 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:27Z","lastTransitionTime":"2026-01-30T08:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.305067 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.305094 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.305105 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.305120 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.305131 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:27Z","lastTransitionTime":"2026-01-30T08:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.407281 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.407313 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.407324 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.407340 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.407351 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:27Z","lastTransitionTime":"2026-01-30T08:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.509635 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.509680 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.509690 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.509710 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.509725 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:27Z","lastTransitionTime":"2026-01-30T08:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.612483 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.612545 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.612565 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.612593 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.612615 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:27Z","lastTransitionTime":"2026-01-30T08:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.714968 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.715006 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.715016 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.715031 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.715042 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:27Z","lastTransitionTime":"2026-01-30T08:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.817034 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.817060 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.817198 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.817224 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.817237 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:27Z","lastTransitionTime":"2026-01-30T08:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.919749 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.919790 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.919800 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.919817 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:10:27 crc kubenswrapper[4870]: I0130 08:10:27.919827 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:27Z","lastTransitionTime":"2026-01-30T08:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.022399 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.022443 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.022451 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.022467 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.022477 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:28Z","lastTransitionTime":"2026-01-30T08:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.069385 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 10:54:17.093810516 +0000 UTC
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.125912 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.125955 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.125966 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.125984 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.125996 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:28Z","lastTransitionTime":"2026-01-30T08:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.227713 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.228008 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.228097 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.228173 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.228251 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:28Z","lastTransitionTime":"2026-01-30T08:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.331385 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.331449 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.331465 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.331488 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.331506 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:28Z","lastTransitionTime":"2026-01-30T08:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.434473 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.434560 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.434573 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.434589 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.434616 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:28Z","lastTransitionTime":"2026-01-30T08:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.476960 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.477017 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.477037 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.477065 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.477081 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:28Z","lastTransitionTime":"2026-01-30T08:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:28 crc kubenswrapper[4870]: E0130 08:10:28.492037 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"42bb4058-de5f-47d3-b90e-bda57dd064e9\\\",\\\"systemUUID\\\":\\\"7dbac932-0e54-4045-a1f0-fa334c8e1b7e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:28Z is after 
2025-08-24T17:21:41Z"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.496699 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.496739 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.496769 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.496792 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.496806 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:28Z","lastTransitionTime":"2026-01-30T08:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.515418 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.515471 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.515486 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.515508 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.515523 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:28Z","lastTransitionTime":"2026-01-30T08:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.531744 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.531779 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.531792 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.531814 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.531829 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:28Z","lastTransitionTime":"2026-01-30T08:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
2025-08-24T17:21:41Z" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.555181 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.555234 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.555247 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.555269 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.555284 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:28Z","lastTransitionTime":"2026-01-30T08:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:28 crc kubenswrapper[4870]: E0130 08:10:28.567449 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"42bb4058-de5f-47d3-b90e-bda57dd064e9\\\",\\\"systemUUID\\\":\\\"7dbac932-0e54-4045-a1f0-fa334c8e1b7e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:28Z is after 
2025-08-24T17:21:41Z" Jan 30 08:10:28 crc kubenswrapper[4870]: E0130 08:10:28.567684 4870 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.569637 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.569698 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.569711 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.569733 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.569747 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:28Z","lastTransitionTime":"2026-01-30T08:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.672991 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.673048 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.673058 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.673078 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.673090 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:28Z","lastTransitionTime":"2026-01-30T08:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.776014 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.776062 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.776072 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.776093 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.776114 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:28Z","lastTransitionTime":"2026-01-30T08:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.879984 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.880023 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.880032 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.880049 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.880059 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:28Z","lastTransitionTime":"2026-01-30T08:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.982831 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.982928 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.982945 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.982974 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:28 crc kubenswrapper[4870]: I0130 08:10:28.982993 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:28Z","lastTransitionTime":"2026-01-30T08:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.070564 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 21:02:15.724052842 +0000 UTC Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.073987 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.074048 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.074048 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:29 crc kubenswrapper[4870]: E0130 08:10:29.074136 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:29 crc kubenswrapper[4870]: E0130 08:10:29.074272 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.074297 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:29 crc kubenswrapper[4870]: E0130 08:10:29.074366 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:29 crc kubenswrapper[4870]: E0130 08:10:29.074418 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.085368 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.085399 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.085411 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.085427 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.085439 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:29Z","lastTransitionTime":"2026-01-30T08:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.187586 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.187650 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.187659 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.187684 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.187698 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:29Z","lastTransitionTime":"2026-01-30T08:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.290850 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.290902 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.290911 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.290929 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.290942 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:29Z","lastTransitionTime":"2026-01-30T08:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.392999 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.393037 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.393048 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.393066 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.393079 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:29Z","lastTransitionTime":"2026-01-30T08:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.495762 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.495819 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.495830 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.495849 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.495861 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:29Z","lastTransitionTime":"2026-01-30T08:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.598544 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.598584 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.598593 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.598611 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.598621 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:29Z","lastTransitionTime":"2026-01-30T08:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.701174 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.701221 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.701230 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.701245 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.701257 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:29Z","lastTransitionTime":"2026-01-30T08:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.804083 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.804179 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.804203 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.804228 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.804246 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:29Z","lastTransitionTime":"2026-01-30T08:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.906844 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.906908 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.906922 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.906944 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:29 crc kubenswrapper[4870]: I0130 08:10:29.906959 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:29Z","lastTransitionTime":"2026-01-30T08:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.010398 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.010463 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.010476 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.010500 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.010513 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:30Z","lastTransitionTime":"2026-01-30T08:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.071024 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 03:52:27.537411847 +0000 UTC Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.113365 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.113397 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.113408 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.113423 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.113434 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:30Z","lastTransitionTime":"2026-01-30T08:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.216039 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.216090 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.216106 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.216132 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.216144 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:30Z","lastTransitionTime":"2026-01-30T08:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.318740 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.318767 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.318775 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.318787 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.318796 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:30Z","lastTransitionTime":"2026-01-30T08:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.421150 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.421195 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.421204 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.421218 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.421226 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:30Z","lastTransitionTime":"2026-01-30T08:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.524376 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.524430 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.524444 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.524467 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.524481 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:30Z","lastTransitionTime":"2026-01-30T08:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.626551 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.626601 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.626614 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.626632 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.626660 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:30Z","lastTransitionTime":"2026-01-30T08:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.729493 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.729539 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.729550 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.729567 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.729579 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:30Z","lastTransitionTime":"2026-01-30T08:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.832265 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.832356 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.832372 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.832395 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.832410 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:30Z","lastTransitionTime":"2026-01-30T08:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.935030 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.935074 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.935085 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.935101 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:30 crc kubenswrapper[4870]: I0130 08:10:30.935112 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:30Z","lastTransitionTime":"2026-01-30T08:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.037636 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.037673 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.037683 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.037698 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.037712 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:31Z","lastTransitionTime":"2026-01-30T08:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.071773 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 19:16:48.587793481 +0000 UTC Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.074033 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.074051 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.074065 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:31 crc kubenswrapper[4870]: E0130 08:10:31.074176 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.074200 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:31 crc kubenswrapper[4870]: E0130 08:10:31.074330 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:31 crc kubenswrapper[4870]: E0130 08:10:31.074383 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:31 crc kubenswrapper[4870]: E0130 08:10:31.074438 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.140325 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.140387 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.140398 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.140419 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.140433 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:31Z","lastTransitionTime":"2026-01-30T08:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.243482 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.243521 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.243529 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.243544 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.243558 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:31Z","lastTransitionTime":"2026-01-30T08:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.348295 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.348355 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.348406 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.348432 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.348446 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:31Z","lastTransitionTime":"2026-01-30T08:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.451831 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.451906 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.451917 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.451970 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.451988 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:31Z","lastTransitionTime":"2026-01-30T08:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.554852 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.554927 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.554938 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.554961 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.554977 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:31Z","lastTransitionTime":"2026-01-30T08:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.657645 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.657711 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.657725 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.657749 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.657765 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:31Z","lastTransitionTime":"2026-01-30T08:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.760914 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.760966 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.760975 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.760992 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.761003 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:31Z","lastTransitionTime":"2026-01-30T08:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.863774 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.863819 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.863838 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.863861 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.863892 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:31Z","lastTransitionTime":"2026-01-30T08:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.966688 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.966753 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.966763 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.966784 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:31 crc kubenswrapper[4870]: I0130 08:10:31.966796 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:31Z","lastTransitionTime":"2026-01-30T08:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.069346 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.069404 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.069417 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.069437 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.069447 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:32Z","lastTransitionTime":"2026-01-30T08:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.072737 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:27:59.136390953 +0000 UTC Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.074812 4870 scope.go:117] "RemoveContainer" containerID="1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d" Jan 30 08:10:32 crc kubenswrapper[4870]: E0130 08:10:32.075009 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" podUID="36037609-52f9-4c09-8beb-6d35a039347b" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.096159 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mp9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b976744-b72d-4291-a32f-437fc1cfbf03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:10:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mp9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:32Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.115822 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:32Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.141816 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:10:19Z\\\",\\\"message\\\":\\\"ne-config-daemon-j4sd8\\\\nI0130 08:10:19.196562 6529 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-j4sd8 in node crc\\\\nF0130 08:10:19.196560 6529 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:10:19.196573 6529 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-j4sd8 after 0 failed attempt(s)\\\\nI0130 08:10:19.196569 6529 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-8kvt7\\\\nI0130 08:10:19.196583 6529 obj_retry.go:303] Retry object setup: *v1.Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:10:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:32Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.156802 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe7ab8f2efaea80fb8f85a7d33f79d70eb848a65d1fc6dd1cacacc479c1065c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a87e50cd50d8225874491d677cc6961215e485310948984353dda93ab97eced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5xdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:32Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.170846 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.170900 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.170915 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.170934 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.170948 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:32Z","lastTransitionTime":"2026-01-30T08:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.173317 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:32Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.188196 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:32Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.201997 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8759add-f10c-406f-a1ee-1a4530b369b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb553cf7aed8c067e432248cd08fdd7db46f939c608b18c4e25737b8d115d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11111845069f3d8452723ee5954d1f68e314872eda1d204abf733f4871a7ec40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb3033803f0aa71bcbc5737ac85911714c8cd9c224b2cb7813a3876e454e081f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:32Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.217106 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:32Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.231063 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:32Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.242465 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:32Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.255240 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:32Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.270968 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:32Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.273158 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.273217 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.273230 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.273251 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.273271 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:32Z","lastTransitionTime":"2026-01-30T08:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.285720 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:32Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.301303 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:32Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.319934 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:32Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.334758 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:32Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.354210 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1
003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:32Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.368742 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:32Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.375316 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.375356 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.375367 4870 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.375383 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.375394 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:32Z","lastTransitionTime":"2026-01-30T08:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.478501 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.478559 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.478573 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.478594 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.478626 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:32Z","lastTransitionTime":"2026-01-30T08:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.581661 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.581713 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.581723 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.581743 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.581756 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:32Z","lastTransitionTime":"2026-01-30T08:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.684975 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.685024 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.685040 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.685064 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.685082 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:32Z","lastTransitionTime":"2026-01-30T08:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.787414 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.787458 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.787470 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.787486 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.787498 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:32Z","lastTransitionTime":"2026-01-30T08:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.889454 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.889486 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.889495 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.889510 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.889520 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:32Z","lastTransitionTime":"2026-01-30T08:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.991656 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.991695 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.991704 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.991718 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:32 crc kubenswrapper[4870]: I0130 08:10:32.991728 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:32Z","lastTransitionTime":"2026-01-30T08:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.073712 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 05:50:40.573676243 +0000 UTC Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.073954 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.073950 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.074055 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:33 crc kubenswrapper[4870]: E0130 08:10:33.074175 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.074394 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:33 crc kubenswrapper[4870]: E0130 08:10:33.074466 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:33 crc kubenswrapper[4870]: E0130 08:10:33.074630 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:33 crc kubenswrapper[4870]: E0130 08:10:33.074781 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.094251 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.094282 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.094291 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.094303 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.094312 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:33Z","lastTransitionTime":"2026-01-30T08:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.197007 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.197072 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.197087 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.197109 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.197127 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:33Z","lastTransitionTime":"2026-01-30T08:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.299306 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.299339 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.299348 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.299361 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.299370 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:33Z","lastTransitionTime":"2026-01-30T08:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.402537 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.402750 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.402843 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.402942 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.403016 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:33Z","lastTransitionTime":"2026-01-30T08:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:33 crc kubenswrapper[4870]: I0130 08:10:33.471284 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs\") pod \"network-metrics-daemon-mp9vw\" (UID: \"7b976744-b72d-4291-a32f-437fc1cfbf03\") " pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:33 crc kubenswrapper[4870]: E0130 08:10:33.471511 4870 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:10:33 crc kubenswrapper[4870]: E0130 08:10:33.471576 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs podName:7b976744-b72d-4291-a32f-437fc1cfbf03 nodeName:}" failed. No retries permitted until 2026-01-30 08:11:05.471553758 +0000 UTC m=+104.167100877 (durationBeforeRetry 32s). 
Jan 30 08:10:33 crc kubenswrapper[4870]: E0130 08:10:33.471576 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs podName:7b976744-b72d-4291-a32f-437fc1cfbf03 nodeName:}" failed. No retries permitted until 2026-01-30 08:11:05.471553758 +0000 UTC m=+104.167100877 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs") pod "network-metrics-daemon-mp9vw" (UID: "7b976744-b72d-4291-a32f-437fc1cfbf03") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:10:34 crc kubenswrapper[4870]: I0130 08:10:34.074481 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 14:33:45.734271364 +0000 UTC Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.074679 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.074746 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.074701 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 17:10:57.674195595 +0000 UTC Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.074895 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:35 crc kubenswrapper[4870]: E0130 08:10:35.074997 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.075142 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:35 crc kubenswrapper[4870]: E0130 08:10:35.075270 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:35 crc kubenswrapper[4870]: E0130 08:10:35.075412 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
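Every NotReady heartbeat and "Error syncing pod" entry above carries the same root cause: no CNI config file in /etc/kubernetes/cni/net.d/. A minimal on-node sketch that reproduces the kubelet's check by listing the directories named in this log (the /host/... path that appears further down is the multus container's view of /run/... on the node; that mapping is an assumption):

    # Hypothetical on-node check: is any CNI config present where the kubelet
    # (and multus, further down) expect one?
    import glob
    import os

    for d in ("/etc/kubernetes/cni/net.d", "/run/multus/cni/net.d"):
        confs = sorted(glob.glob(os.path.join(d, "*.conf"))) + \
                sorted(glob.glob(os.path.join(d, "*.conflist")))
        print(d, "->", confs if confs else "empty (matches NetworkPluginNotReady)")

An empty listing for both directories is consistent with the ovn-kubernetes network plugin not having started, which is exactly what the recurring "Has your network provider started?" message asks.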
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.156255 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.156311 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.156327 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.156354 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.156372 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:35Z","lastTransitionTime":"2026-01-30T08:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.260012 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.260081 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.260100 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.260132 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.260152 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:35Z","lastTransitionTime":"2026-01-30T08:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.362727 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.362798 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.362821 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.362849 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.362912 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:35Z","lastTransitionTime":"2026-01-30T08:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.466389 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.466495 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.466511 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.466535 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.466550 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:35Z","lastTransitionTime":"2026-01-30T08:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
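The kube-multus container whose exit is recorded just below died waiting for its readiness-indicator file, /host/run/multus/cni/net.d/10-ovn-kubernetes.conf (see its termination message further down in this log). A sketch of that wait loop under assumed values; the 45-second budget is illustrative, not multus's actual setting:

    # Hypothetical stand-in for the multus readiness-indicator wait: poll until
    # ovn-kubernetes writes its CNI config, or give up like the log's
    # "timed out waiting for the condition".
    import os
    import time

    INDICATOR = "/run/multus/cni/net.d/10-ovn-kubernetes.conf"  # assumed node view of /host/run/...
    deadline = time.monotonic() + 45  # illustrative timeout
    while time.monotonic() < deadline:
        if os.path.exists(INDICATOR):
            print("indicator present; multus would report Ready")
            break
        time.sleep(1)
    else:
        print("timed out; matches the readinessindicatorfile error below")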
Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.552019 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hsmrb_3e8e9e25-2b9b-4820-8282-48e1d930a721/kube-multus/0.log" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.552095 4870 generic.go:334] "Generic (PLEG): container finished" podID="3e8e9e25-2b9b-4820-8282-48e1d930a721" containerID="f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a" exitCode=1 Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.552149 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hsmrb" event={"ID":"3e8e9e25-2b9b-4820-8282-48e1d930a721","Type":"ContainerDied","Data":"f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a"} Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.552822 4870 scope.go:117] "RemoveContainer" containerID="f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.569923 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.569955 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.569965 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.569982 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.569996 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:35Z","lastTransitionTime":"2026-01-30T08:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.583335 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.603065 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.619033 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.634632 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:10:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.651629 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:10:35Z\\\",\\\"message\\\":\\\"2026-01-30T08:09:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_63624c98-b351-4f15-8ccd-8ea961785431\\\\n2026-01-30T08:09:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_63624c98-b351-4f15-8ccd-8ea961785431 to /host/opt/cni/bin/\\\\n2026-01-30T08:09:50Z [verbose] multus-daemon started\\\\n2026-01-30T08:09:50Z [verbose] Readiness Indicator file check\\\\n2026-01-30T08:10:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.664451 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.676538 4870 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.676585 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.676594 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.676611 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.676622 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:35Z","lastTransitionTime":"2026-01-30T08:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.677808 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.702518 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57520
2e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:10:19Z\\\",\\\"message\\\":\\\"ne-config-daemon-j4sd8\\\\nI0130 08:10:19.196562 6529 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-j4sd8 in node crc\\\\nF0130 08:10:19.196560 6529 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:10:19.196573 6529 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-j4sd8 after 0 failed attempt(s)\\\\nI0130 08:10:19.196569 6529 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-8kvt7\\\\nI0130 08:10:19.196583 6529 obj_retry.go:303] Retry object setup: 
*v1.Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:10:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.717743 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe7ab8f2efaea80fb8f85a7d33f79d70eb848a65d1fc6dd1cacacc479c1065c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a87e50cd50d8225874491d677cc6961215e485310948984353dda93ab97eced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5xdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:35Z is after 2025-08-24T17:21:41Z" Jan 30 
08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.729351 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mp9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b976744-b72d-4291-a32f-437fc1cfbf03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:10:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mp9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.746324 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.761774 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.776189 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.779221 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.779252 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.779262 4870 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.779279 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.779290 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:35Z","lastTransitionTime":"2026-01-30T08:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.792353 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.807828 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.824316 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8759add-f10c-406f-a1ee-1a4530b369b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb553cf7aed8c067e432248cd08fdd7db46f939c608b18c4e25737b8d115d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11111845069f3d8452723ee5954d1f68e314872eda1d204abf733f4871a7ec40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb3033803f0aa71bcbc5737ac85911714c8cd9c224b2cb7813a3876e454e081f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.838643 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:35Z is after 2025-08-24T17:21:41Z" Jan 30 
08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.857973 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.881669 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.881710 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.881725 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.881742 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.881755 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:35Z","lastTransitionTime":"2026-01-30T08:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.983598 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.983628 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.983635 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.983648 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:35 crc kubenswrapper[4870]: I0130 08:10:35.983657 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:35Z","lastTransitionTime":"2026-01-30T08:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.076124 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 15:26:41.724408914 +0000 UTC Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.085378 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.085402 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.085411 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.085424 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.085435 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:36Z","lastTransitionTime":"2026-01-30T08:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.188369 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.188424 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.188433 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.188453 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.188464 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:36Z","lastTransitionTime":"2026-01-30T08:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.290963 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.290995 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.291005 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.291021 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.291031 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:36Z","lastTransitionTime":"2026-01-30T08:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.393648 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.393706 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.393716 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.393737 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.393752 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:36Z","lastTransitionTime":"2026-01-30T08:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.496253 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.496324 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.496344 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.496366 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.496384 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:36Z","lastTransitionTime":"2026-01-30T08:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.558589 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hsmrb_3e8e9e25-2b9b-4820-8282-48e1d930a721/kube-multus/0.log" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.558686 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hsmrb" event={"ID":"3e8e9e25-2b9b-4820-8282-48e1d930a721","Type":"ContainerStarted","Data":"e3a6e35394d201b4791302c67100d562af8287400f9c84b312e22704a65348d6"} Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.577275 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:36Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.589612 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:36Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.600912 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.600950 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.600959 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.600974 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.600984 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:36Z","lastTransitionTime":"2026-01-30T08:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.601231 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:36Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.611712 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:36Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.631742 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:36Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.645150 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:36Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.655643 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:36Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.663423 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:10:36Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.673502 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a6e35394d201b4791302c67100d562af8287400f9c84b312e22704a65348d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:10:35Z\\\",\\\"message\\\":\\\"2026-01-30T08:09:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_63624c98-b351-4f15-8ccd-8ea961785431\\\\n2026-01-30T08:09:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_63624c98-b351-4f15-8ccd-8ea961785431 to /host/opt/cni/bin/\\\\n2026-01-30T08:09:50Z [verbose] multus-daemon started\\\\n2026-01-30T08:09:50Z [verbose] Readiness Indicator file check\\\\n2026-01-30T08:10:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:36Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.689053 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:36Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.704127 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.704171 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:36 crc 
kubenswrapper[4870]: I0130 08:10:36.704181 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.704197 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.704209 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:36Z","lastTransitionTime":"2026-01-30T08:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.722016 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da023d97f1e045ab17e1a0d5c12dea48d7a49ef
948846242561a014653e7a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:10:19Z\\\",\\\"message\\\":\\\"ne-config-daemon-j4sd8\\\\nI0130 08:10:19.196562 6529 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-j4sd8 in node crc\\\\nF0130 08:10:19.196560 6529 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:10:19.196573 6529 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-j4sd8 after 0 failed attempt(s)\\\\nI0130 08:10:19.196569 6529 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-8kvt7\\\\nI0130 08:10:19.196583 6529 obj_retry.go:303] Retry object setup: *v1.Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:10:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:36Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.737368 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe7ab8f2efaea80fb8f85a7d33f79d70eb848a65d1fc6dd1cacacc479c1065c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a87e50cd50d8225874491d677cc6961215e485310948984353dda93ab97eced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5xdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:36Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.751005 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mp9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b976744-b72d-4291-a32f-437fc1cfbf03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:10:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mp9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:36Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.765138 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8759add-f10c-406f-a1ee-1a4530b369b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb553cf7aed8c067e432248cd08fdd7db46f939c608b18c4e25737b8d115d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11111845069f3d8452723ee5954d1f68e314872eda1d204abf733f4871a7ec40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb3033803f0aa71bcbc5737ac85911714c8cd9c224b2cb7813a3876e454e081f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:36Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.780952 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:36Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.796187 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:36Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.807336 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.807385 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.807395 4870 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.807414 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.807429 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:36Z","lastTransitionTime":"2026-01-30T08:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.812747 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:36Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.828524 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:36Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.910861 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.910928 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.910942 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.910962 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:36 crc kubenswrapper[4870]: I0130 08:10:36.910975 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:36Z","lastTransitionTime":"2026-01-30T08:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.013184 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.013246 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.013261 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.013283 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.013296 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:37Z","lastTransitionTime":"2026-01-30T08:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.073832 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.073907 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.073931 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.074009 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:37 crc kubenswrapper[4870]: E0130 08:10:37.074074 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:37 crc kubenswrapper[4870]: E0130 08:10:37.074181 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:37 crc kubenswrapper[4870]: E0130 08:10:37.074324 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:37 crc kubenswrapper[4870]: E0130 08:10:37.074452 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.077083 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:45:14.885537041 +0000 UTC Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.115606 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.115658 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.115674 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.115694 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.115711 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:37Z","lastTransitionTime":"2026-01-30T08:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.218195 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.218237 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.218245 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.218259 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.218269 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:37Z","lastTransitionTime":"2026-01-30T08:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.320253 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.320295 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.320306 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.320321 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.320332 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:37Z","lastTransitionTime":"2026-01-30T08:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.423544 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.423584 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.423596 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.423616 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.423630 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:37Z","lastTransitionTime":"2026-01-30T08:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.526089 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.526144 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.526155 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.526176 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.526192 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:37Z","lastTransitionTime":"2026-01-30T08:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.628595 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.628635 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.628644 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.628671 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.628682 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:37Z","lastTransitionTime":"2026-01-30T08:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.732207 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.732259 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.732270 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.732291 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.732305 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:37Z","lastTransitionTime":"2026-01-30T08:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.835408 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.835459 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.835476 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.835502 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.835519 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:37Z","lastTransitionTime":"2026-01-30T08:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.938702 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.938737 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.938748 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.938762 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:37 crc kubenswrapper[4870]: I0130 08:10:37.938775 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:37Z","lastTransitionTime":"2026-01-30T08:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.040826 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.040864 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.040899 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.040916 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.040928 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:38Z","lastTransitionTime":"2026-01-30T08:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.077397 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 13:37:50.40981678 +0000 UTC Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.143158 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.143209 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.143226 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.143247 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.143264 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:38Z","lastTransitionTime":"2026-01-30T08:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.245638 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.245670 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.245679 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.245695 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.245706 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:38Z","lastTransitionTime":"2026-01-30T08:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.347547 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.347659 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.347678 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.347702 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.347720 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:38Z","lastTransitionTime":"2026-01-30T08:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.450419 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.450471 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.450490 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.450515 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.450538 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:38Z","lastTransitionTime":"2026-01-30T08:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.554483 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.554523 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.554531 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.554545 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.554553 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:38Z","lastTransitionTime":"2026-01-30T08:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.657863 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.657911 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.657921 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.657936 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.657946 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:38Z","lastTransitionTime":"2026-01-30T08:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.759936 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.759965 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.759975 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.759988 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.759998 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:38Z","lastTransitionTime":"2026-01-30T08:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.823738 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.823806 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.823829 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.823925 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.823954 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:38Z","lastTransitionTime":"2026-01-30T08:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:38 crc kubenswrapper[4870]: E0130 08:10:38.839352 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"42bb4058-de5f-47d3-b90e-bda57dd064e9\\\",\\\"systemUUID\\\":\\\"7dbac932-0e54-4045-a1f0-fa334c8e1b7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:38Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.844048 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.844104 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.844127 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.844155 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.844177 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:38Z","lastTransitionTime":"2026-01-30T08:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:38 crc kubenswrapper[4870]: E0130 08:10:38.864308 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"42bb4058-de5f-47d3-b90e-bda57dd064e9\\\",\\\"systemUUID\\\":\\\"7dbac932-0e54-4045-a1f0-fa334c8e1b7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:38Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.869303 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.869352 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.869369 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.869393 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.869410 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:38Z","lastTransitionTime":"2026-01-30T08:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:38 crc kubenswrapper[4870]: E0130 08:10:38.883626 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"42bb4058-de5f-47d3-b90e-bda57dd064e9\\\",\\\"systemUUID\\\":\\\"7dbac932-0e54-4045-a1f0-fa334c8e1b7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:38Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.888216 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.888259 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.888270 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.888288 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.888300 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:38Z","lastTransitionTime":"2026-01-30T08:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:38 crc kubenswrapper[4870]: E0130 08:10:38.905870 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"42bb4058-de5f-47d3-b90e-bda57dd064e9\\\",\\\"systemUUID\\\":\\\"7dbac932-0e54-4045-a1f0-fa334c8e1b7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:38Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.911037 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.911097 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.911121 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.911147 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.911167 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:38Z","lastTransitionTime":"2026-01-30T08:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:38 crc kubenswrapper[4870]: E0130 08:10:38.925382 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"42bb4058-de5f-47d3-b90e-bda57dd064e9\\\",\\\"systemUUID\\\":\\\"7dbac932-0e54-4045-a1f0-fa334c8e1b7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:38Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:38 crc kubenswrapper[4870]: E0130 08:10:38.925529 4870 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.927158 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.927189 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.927198 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.927213 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:38 crc kubenswrapper[4870]: I0130 08:10:38.927226 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:38Z","lastTransitionTime":"2026-01-30T08:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.030448 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.030522 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.030544 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.030573 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.030599 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:39Z","lastTransitionTime":"2026-01-30T08:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.074551 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.074610 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.074549 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.074696 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:39 crc kubenswrapper[4870]: E0130 08:10:39.074720 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:39 crc kubenswrapper[4870]: E0130 08:10:39.074856 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:39 crc kubenswrapper[4870]: E0130 08:10:39.075077 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:39 crc kubenswrapper[4870]: E0130 08:10:39.075209 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.077834 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:27:35.347328825 +0000 UTC Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.133610 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.133660 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.133679 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.133705 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.133724 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:39Z","lastTransitionTime":"2026-01-30T08:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.236275 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.236338 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.236361 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.236392 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.236418 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:39Z","lastTransitionTime":"2026-01-30T08:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.338559 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.338603 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.338613 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.338627 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.338638 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:39Z","lastTransitionTime":"2026-01-30T08:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.440920 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.440944 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.440953 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.440966 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.440974 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:39Z","lastTransitionTime":"2026-01-30T08:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.544519 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.544595 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.544618 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.544650 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.544675 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:39Z","lastTransitionTime":"2026-01-30T08:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.646857 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.646934 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.646952 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.646976 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.646990 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:39Z","lastTransitionTime":"2026-01-30T08:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.749051 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.749102 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.749122 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.749144 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.749161 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:39Z","lastTransitionTime":"2026-01-30T08:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.851304 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.851340 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.851348 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.851362 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.851372 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:39Z","lastTransitionTime":"2026-01-30T08:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.954152 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.954207 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.954225 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.954247 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:39 crc kubenswrapper[4870]: I0130 08:10:39.954264 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:39Z","lastTransitionTime":"2026-01-30T08:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.057595 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.057667 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.057705 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.057731 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.057750 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:40Z","lastTransitionTime":"2026-01-30T08:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.078812 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:45:51.814333235 +0000 UTC Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.160909 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.160969 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.160989 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.161011 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.161028 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:40Z","lastTransitionTime":"2026-01-30T08:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.264305 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.264419 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.264436 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.264452 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.264465 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:40Z","lastTransitionTime":"2026-01-30T08:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.367362 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.367424 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.367441 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.367462 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.367477 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:40Z","lastTransitionTime":"2026-01-30T08:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.470984 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.471044 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.471062 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.471085 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.471101 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:40Z","lastTransitionTime":"2026-01-30T08:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.573462 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.573532 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.573545 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.573562 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.573594 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:40Z","lastTransitionTime":"2026-01-30T08:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.677582 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.677623 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.677632 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.677646 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.677656 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:40Z","lastTransitionTime":"2026-01-30T08:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.780752 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.780793 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.780805 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.780823 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.780835 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:40Z","lastTransitionTime":"2026-01-30T08:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.884257 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.884299 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.884307 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.884322 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.884332 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:40Z","lastTransitionTime":"2026-01-30T08:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.987360 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.987430 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.987444 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.987466 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:10:40 crc kubenswrapper[4870]: I0130 08:10:40.987483 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:40Z","lastTransitionTime":"2026-01-30T08:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.073985 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.074024 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.074001 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.074128 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw"
Jan 30 08:10:41 crc kubenswrapper[4870]: E0130 08:10:41.074154 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 08:10:41 crc kubenswrapper[4870]: E0130 08:10:41.074261 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 08:10:41 crc kubenswrapper[4870]: E0130 08:10:41.074426 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03"
Jan 30 08:10:41 crc kubenswrapper[4870]: E0130 08:10:41.074564 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.079935 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 13:26:21.890386248 +0000 UTC
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.090525 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.090560 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.090571 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.090588 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.090601 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:41Z","lastTransitionTime":"2026-01-30T08:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.192611 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.192644 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.192653 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.192667 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.192676 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:41Z","lastTransitionTime":"2026-01-30T08:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.295403 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.295450 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.295461 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.295481 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.295493 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:41Z","lastTransitionTime":"2026-01-30T08:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.398128 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.398213 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.398239 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.398270 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.398292 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:41Z","lastTransitionTime":"2026-01-30T08:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.501644 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.501705 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.501719 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.501741 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.501762 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:41Z","lastTransitionTime":"2026-01-30T08:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.604923 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.604979 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.604993 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.605011 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.605025 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:41Z","lastTransitionTime":"2026-01-30T08:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.708092 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.708143 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.708171 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.708191 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.708203 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:41Z","lastTransitionTime":"2026-01-30T08:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.810175 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.810216 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.810225 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.810238 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.810247 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:41Z","lastTransitionTime":"2026-01-30T08:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.913126 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.913191 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.913202 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.913235 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:41 crc kubenswrapper[4870]: I0130 08:10:41.913248 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:41Z","lastTransitionTime":"2026-01-30T08:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.015365 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.015407 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.015419 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.015436 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.015450 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:42Z","lastTransitionTime":"2026-01-30T08:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.080307 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 17:30:13.061444685 +0000 UTC Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.093845 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:42Z is after 
2025-08-24T17:21:41Z" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.112632 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.117575 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.117626 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.117640 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.117658 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.117669 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:42Z","lastTransitionTime":"2026-01-30T08:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.133284 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.153404 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8759add-f10c-406f-a1ee-1a4530b369b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb553cf7aed8c067e432248cd08fdd7db46f939c608b18c4e25737b8d115d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11111845069f3d8452723ee5954d1f68e314872eda1d204abf733f4871a7ec40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb3033803f0aa71bcbc5737ac85911714c8cd9c224b2cb7813a3876e454e081f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.180456 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.199178 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.214816 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.225040 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.225360 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.225554 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.225782 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.226014 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:42Z","lastTransitionTime":"2026-01-30T08:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.232149 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.248303 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.262070 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:10:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.276545 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a6e35394d201b4791302c67100d562af8287400f9c84b312e22704a65348d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:10:35Z\\\",\\\"message\\\":\\\"2026-01-30T08:09:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_63624c98-b351-4f15-8ccd-8ea961785431\\\\n2026-01-30T08:09:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_63624c98-b351-4f15-8ccd-8ea961785431 to /host/opt/cni/bin/\\\\n2026-01-30T08:09:50Z [verbose] multus-daemon started\\\\n2026-01-30T08:09:50Z [verbose] Readiness Indicator file check\\\\n2026-01-30T08:10:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.294167 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.307969 4870 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.328592 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.328636 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.328647 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.328664 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.328676 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:42Z","lastTransitionTime":"2026-01-30T08:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.332573 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.346708 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe7ab8f2efaea80fb8f85a7d33f79d70eb848a65d1fc6dd1cacacc479c1065c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a87e50cd50d8225874491d677cc6961215e485310948984353dda93ab97eced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5xdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:42Z is after 2025-08-24T17:21:41Z" Jan 30 
08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.362681 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mp9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b976744-b72d-4291-a32f-437fc1cfbf03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:10:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mp9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.382677 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.409322 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:10:19Z\\\",\\\"message\\\":\\\"ne-config-daemon-j4sd8\\\\nI0130 08:10:19.196562 6529 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-j4sd8 in node crc\\\\nF0130 08:10:19.196560 6529 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:10:19.196573 6529 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-j4sd8 after 0 failed attempt(s)\\\\nI0130 08:10:19.196569 6529 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-8kvt7\\\\nI0130 08:10:19.196583 6529 obj_retry.go:303] Retry object setup: *v1.Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:10:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.432434 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.432490 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.432502 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.432521 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.432535 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:42Z","lastTransitionTime":"2026-01-30T08:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.535237 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.535295 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.535307 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.535328 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.535340 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:42Z","lastTransitionTime":"2026-01-30T08:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.637831 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.637942 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.637952 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.637970 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.637980 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:42Z","lastTransitionTime":"2026-01-30T08:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.740448 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.740493 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.740502 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.740517 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.740528 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:42Z","lastTransitionTime":"2026-01-30T08:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.843659 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.843706 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.843716 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.843732 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.843743 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:42Z","lastTransitionTime":"2026-01-30T08:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.947920 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.947973 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.947990 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.948012 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:42 crc kubenswrapper[4870]: I0130 08:10:42.948029 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:42Z","lastTransitionTime":"2026-01-30T08:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.050732 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.050792 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.050983 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.051055 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.051085 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:43Z","lastTransitionTime":"2026-01-30T08:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.073800 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.073918 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.074015 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:43 crc kubenswrapper[4870]: E0130 08:10:43.074245 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.074350 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:43 crc kubenswrapper[4870]: E0130 08:10:43.074474 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:43 crc kubenswrapper[4870]: E0130 08:10:43.075377 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:43 crc kubenswrapper[4870]: E0130 08:10:43.075506 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.075981 4870 scope.go:117] "RemoveContainer" containerID="1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.081427 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 03:41:50.574799766 +0000 UTC Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.155324 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.155801 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.155813 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.155832 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.155845 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:43Z","lastTransitionTime":"2026-01-30T08:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.258682 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.258735 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.258747 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.258772 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.258785 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:43Z","lastTransitionTime":"2026-01-30T08:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.361104 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.361148 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.361162 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.361180 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.361192 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:43Z","lastTransitionTime":"2026-01-30T08:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.464043 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.464091 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.464112 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.464135 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.464150 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:43Z","lastTransitionTime":"2026-01-30T08:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.568241 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.568310 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.568345 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.568386 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.568407 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:43Z","lastTransitionTime":"2026-01-30T08:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.594951 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj5db_36037609-52f9-4c09-8beb-6d35a039347b/ovnkube-controller/2.log" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.615419 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerStarted","Data":"1cfc68484a864e957ca34c89143ab2d216ee6d034da9aa1502a74f8358dc2258"} Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.616296 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.639256 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:43Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 
08:10:43.657856 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\
\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:43Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.672218 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.672268 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.672282 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.672302 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.672315 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:43Z","lastTransitionTime":"2026-01-30T08:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.681594 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:43Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.698228 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:43Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.709939 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:10:43Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.724532 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a6e35394d201b4791302c67100d562af8287400f9c84b312e22704a65348d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:10:35Z\\\",\\\"message\\\":\\\"2026-01-30T08:09:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_63624c98-b351-4f15-8ccd-8ea961785431\\\\n2026-01-30T08:09:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_63624c98-b351-4f15-8ccd-8ea961785431 to /host/opt/cni/bin/\\\\n2026-01-30T08:09:50Z [verbose] multus-daemon started\\\\n2026-01-30T08:09:50Z [verbose] Readiness Indicator file check\\\\n2026-01-30T08:10:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:43Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.737643 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:43Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.764793 4870 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:43Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.782635 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cfc68484a864e957ca34c89143ab2d216ee6d034da9aa1502a74f8358dc2258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:10:19Z\\\",\\\"message\\\":\\\"ne-config-daemon-j4sd8\\\\nI0130 08:10:19.196562 6529 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-j4sd8 in node crc\\\\nF0130 08:10:19.196560 6529 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:10:19.196573 6529 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-j4sd8 after 0 failed attempt(s)\\\\nI0130 08:10:19.196569 6529 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-8kvt7\\\\nI0130 08:10:19.196583 6529 obj_retry.go:303] Retry object setup: 
*v1.Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:10:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:43Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.793748 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe7ab8f2efaea80fb8f85a7d33f79d70eb848a65d1fc6dd1cacacc479c1065c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a87e50cd50d8225874491d677cc6961215e485310948984353dda93ab97eced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5xdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:43Z is after 2025-08-24T17:21:41Z" Jan 30 
08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.830327 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mp9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b976744-b72d-4291-a32f-437fc1cfbf03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:10:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mp9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:43Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.836670 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.836711 4870 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.836722 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.836740 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.836752 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:43Z","lastTransitionTime":"2026-01-30T08:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.841983 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8759add-f10c-406f-a1ee-1a4530b369b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb553cf7aed8c067e432248cd08fdd7db46f939c608b18c4e25737b8d115d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11111845069f3d8452723ee5954d1f68e314872eda1d204abf733f4871a7ec40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb3033803f0aa71bcbc5737ac85911714c8cd9c224b2cb7813a3876e454e081f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:43Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.861077 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:43Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.871895 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:43Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.883677 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:43Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.895725 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:43Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.906123 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:43Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.915980 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:43Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.938923 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.938963 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.938980 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.939002 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:43 crc kubenswrapper[4870]: I0130 08:10:43.939019 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:43Z","lastTransitionTime":"2026-01-30T08:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.041389 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.041444 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.041453 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.041472 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.041482 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:44Z","lastTransitionTime":"2026-01-30T08:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.082496 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 14:37:22.24357326 +0000 UTC Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.144868 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.144941 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.144953 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.144974 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.144985 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:44Z","lastTransitionTime":"2026-01-30T08:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.248762 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.248803 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.248818 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.248838 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.248852 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:44Z","lastTransitionTime":"2026-01-30T08:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.352159 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.352264 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.352278 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.352307 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.352323 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:44Z","lastTransitionTime":"2026-01-30T08:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.455059 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.455118 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.455132 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.455152 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.455164 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:44Z","lastTransitionTime":"2026-01-30T08:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.558511 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.558594 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.558627 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.558658 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.558683 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:44Z","lastTransitionTime":"2026-01-30T08:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.623307 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj5db_36037609-52f9-4c09-8beb-6d35a039347b/ovnkube-controller/3.log" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.624077 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj5db_36037609-52f9-4c09-8beb-6d35a039347b/ovnkube-controller/2.log" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.628327 4870 generic.go:334] "Generic (PLEG): container finished" podID="36037609-52f9-4c09-8beb-6d35a039347b" containerID="1cfc68484a864e957ca34c89143ab2d216ee6d034da9aa1502a74f8358dc2258" exitCode=1 Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.628394 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerDied","Data":"1cfc68484a864e957ca34c89143ab2d216ee6d034da9aa1502a74f8358dc2258"} Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.628445 4870 scope.go:117] "RemoveContainer" containerID="1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.629083 4870 scope.go:117] "RemoveContainer" containerID="1cfc68484a864e957ca34c89143ab2d216ee6d034da9aa1502a74f8358dc2258" Jan 30 08:10:44 crc kubenswrapper[4870]: E0130 08:10:44.629286 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" podUID="36037609-52f9-4c09-8beb-6d35a039347b" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.647251 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8759add-f10c-406f-a1ee-1a4530b369b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb553cf7aed8c067e432248cd08fdd7db46f939c608b18c4e25737b8d115d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11111845069f3d8452723ee5954d1f68e314872eda1d204abf733f4871a7ec40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb3033803f0aa71bcbc5737ac85911714c8cd9c224b2cb7813a3876e454e081f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.662095 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.662191 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.662213 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.662282 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.662304 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:44Z","lastTransitionTime":"2026-01-30T08:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.665610 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.684432 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.705738 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.724304 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.740908 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.761139 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.766555 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.766591 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.766608 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.766628 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.766642 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:44Z","lastTransitionTime":"2026-01-30T08:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.775768 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.792329 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.823465 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.848911 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.868056 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.870417 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.870448 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.870460 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.870476 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.870488 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:44Z","lastTransitionTime":"2026-01-30T08:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.883315 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.896434 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a6e35394d201b4791302c67100d562af8287400f9c84b312e22704a65348d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:10:35Z\\\",\\\"message\\\":\\\"2026-01-30T08:09:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_63624c98-b351-4f15-8ccd-8ea961785431\\\\n2026-01-30T08:09:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_63624c98-b351-4f15-8ccd-8ea961785431 to /host/opt/cni/bin/\\\\n2026-01-30T08:09:50Z [verbose] multus-daemon started\\\\n2026-01-30T08:09:50Z [verbose] Readiness Indicator file check\\\\n2026-01-30T08:10:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.898798 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:10:44 crc kubenswrapper[4870]: E0130 08:10:44.898947 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:48.898925314 +0000 UTC m=+147.594472433 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.913232 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\
\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.935569 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o
://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cfc68484a864e957ca34c89143ab2d216ee6d034da9aa1502a74f8358dc2258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da023d97f1e045ab17e1a0d5c12dea48d7a49ef948846242561a014653e7a1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:10:19Z\\\",\\\"message\\\":\\\"ne-config-daemon-j4sd8\\\\nI0130 08:10:19.196562 6529 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-j4sd8 in node crc\\\\nF0130 08:10:19.196560 6529 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:19Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:10:19.196573 6529 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-j4sd8 after 0 failed attempt(s)\\\\nI0130 08:10:19.196569 6529 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-8kvt7\\\\nI0130 08:10:19.196583 6529 obj_retry.go:303] Retry object setup: 
*v1.Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:10:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cfc68484a864e957ca34c89143ab2d216ee6d034da9aa1502a74f8358dc2258\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:10:44Z\\\",\\\"message\\\":\\\"ontroller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-cj5db\\\\nI0130 08:10:44.228252 6956 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 08:10:44.228262 6956 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:10:44.228249 6956 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0130 08:10:44.228275 6956 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0130 08:10:44.228279 6956 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq\\\\nI0130 08:10:44.228289 6956 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-j4sd8\\\\nI0130 08:10:44.228296 6956 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-j4sd8\\\\nI0130 08:10:44.228299 6956 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq\\\\nI0130 08:10:44.228311 6956 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-mp9vw\\\\nI0130 
08:10:44.2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.952191 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe7ab8f2efaea80fb8f85a7d33f79d70eb848a65d1fc6dd1cacacc479c1065c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a87e50cd50d8225874491d677cc6961215e485310948984353dda93ab97eced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5xdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.966490 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mp9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b976744-b72d-4291-a32f-437fc1cfbf03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:10:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mp9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:44Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.973269 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.973299 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.973309 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.973325 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:44 crc kubenswrapper[4870]: I0130 08:10:44.973337 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:44Z","lastTransitionTime":"2026-01-30T08:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.000466 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.000507 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.000525 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.000559 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:45 crc kubenswrapper[4870]: E0130 08:10:45.000654 4870 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:10:45 crc kubenswrapper[4870]: E0130 08:10:45.000692 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:11:49.000679655 +0000 UTC m=+147.696226774 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:10:45 crc kubenswrapper[4870]: E0130 08:10:45.000824 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:10:45 crc kubenswrapper[4870]: E0130 08:10:45.000835 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:10:45 crc kubenswrapper[4870]: E0130 08:10:45.000844 4870 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:10:45 crc kubenswrapper[4870]: E0130 08:10:45.000864 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:11:49.000858571 +0000 UTC m=+147.696405680 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:10:45 crc kubenswrapper[4870]: E0130 08:10:45.001008 4870 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:10:45 crc kubenswrapper[4870]: E0130 08:10:45.001035 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:11:49.001027946 +0000 UTC m=+147.696575055 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:10:45 crc kubenswrapper[4870]: E0130 08:10:45.001076 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:10:45 crc kubenswrapper[4870]: E0130 08:10:45.001107 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:10:45 crc kubenswrapper[4870]: E0130 08:10:45.001123 4870 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:10:45 crc kubenswrapper[4870]: E0130 08:10:45.001180 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:11:49.00116421 +0000 UTC m=+147.696711329 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.074223 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.074249 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.074226 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:45 crc kubenswrapper[4870]: E0130 08:10:45.074409 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.074215 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:45 crc kubenswrapper[4870]: E0130 08:10:45.074747 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:45 crc kubenswrapper[4870]: E0130 08:10:45.074940 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:45 crc kubenswrapper[4870]: E0130 08:10:45.075142 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.078006 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.078055 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.078073 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.078099 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.078117 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:45Z","lastTransitionTime":"2026-01-30T08:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.083154 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:14:45.785929181 +0000 UTC Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.181584 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.181657 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.181680 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.181709 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.181731 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:45Z","lastTransitionTime":"2026-01-30T08:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.285317 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.285389 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.285413 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.285442 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.285466 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:45Z","lastTransitionTime":"2026-01-30T08:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.389151 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.389217 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.389230 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.389251 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.389264 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:45Z","lastTransitionTime":"2026-01-30T08:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.491893 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.491958 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.491973 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.491998 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.492015 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:45Z","lastTransitionTime":"2026-01-30T08:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.594317 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.594356 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.594364 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.594379 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.594389 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:45Z","lastTransitionTime":"2026-01-30T08:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.645218 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj5db_36037609-52f9-4c09-8beb-6d35a039347b/ovnkube-controller/3.log" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.650055 4870 scope.go:117] "RemoveContainer" containerID="1cfc68484a864e957ca34c89143ab2d216ee6d034da9aa1502a74f8358dc2258" Jan 30 08:10:45 crc kubenswrapper[4870]: E0130 08:10:45.650219 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" podUID="36037609-52f9-4c09-8beb-6d35a039347b" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.663302 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsmrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8e9e25-2b9b-4820-8282-48e1d930a721\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a6e35394d201b4791302c67100d562af8287400f9c84b312e22704a65348d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:10:35Z\\\",\\\"message\\\":\\\"2026-01-30T08:09:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_63624c98-b351-4f15-8ccd-8ea961785431\\\\n2026-01-30T08:09:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_63624c98-b351-4f15-8ccd-8ea961785431 to /host/opt/cni/bin/\\\\n2026-01-30T08:09:50Z [verbose] multus-daemon started\\\\n2026-01-30T08:09:50Z [verbose] Readiness Indicator file check\\\\n2026-01-30T08:10:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k7mcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsmrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.674046 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5c19ea869d0983bc1b4bad592a3d73b8640185351f260c21d3fb0ba97eac85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hp4df\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j4sd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.684958 4870 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-dpj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228f8bf9-7e75-4886-8441-57bc0d251413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://872d591619cf7b2e3ef588c41d9d457887802dda965a2b87ef21b1ff023b29f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9dtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dpj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.697299 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.697335 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.697346 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.697363 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.697374 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:45Z","lastTransitionTime":"2026-01-30T08:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.706186 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26285d6c-2642-414d-9dc0-904549b87975\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd742081271df9e1741e1598e3017ae81c62a78f41203638f049a870e15f43cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3556e852240298ce97a0728d74120b9feec9bdab0d5450948ec31a7a64751973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03d4387439ba106308e50df06e5b0b9725e6f93fce8ea2e8159514951132572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fd268ae9709b31196dd855a430d621ba1f27e1003438f09ff6c8f9d1c3164f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://026760c59859da40f402d09c4912636bc87455cd90302b4bea238eeba0ff451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b1cbc698afc544b846c470ad20c60d804f11b6d03594bb539fcee97d547c03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://324a44e3853b8b314a5eef793617fcf46eae6ed12900e1006e78660a1aa04ca9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1233caf10c907a6e2694d76c367bbc12eb25378636b47aa4eced46222db68fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.720327 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d9bc-1d2f-4854-a0d9-0b487817c8d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500cb8fa8550901863f5e814c6c992509f42e77808722ec63c9aa9de81fd673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f8c543f5b6d1542e63c6ec4417ada801afdeb2f979adb1bfa5aef7a4efa2a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fc71b720ba0da0b93c2a74bb3f9ba338bd2206037cd1aa1e51d6cb5dd4065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.753189 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.774619 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1239efc2-d4e8-4a88-a0bf-00a685812999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9470a761f33a3cb266f70865d69fae18e7a5946ca3d15365858d914d641a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzdvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T08:10:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.792997 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdd7f5e-1187-4760-b2dc-98c3d3286f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d4f6f4f3d0bcb6476b76a9154e3cdf46867cb903cc02180043802658b6e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ebec6d66bdcb97ce260a1d7c16eb1b7db9b39b9a73b5f51476798bfba7c0fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce
07e078091bc77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0747fc4e1ccfe703e05a090c3f1bc907110788691c6b772ce07e078091bc77f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4b67ac15165704d2807acc0575ae3504c3cf5073195e6e4323edd116925e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199f858676f482865c8bf20140fdae1afbda8fbccec75486883fa04da772369e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05ff9365b75b0e4851937cc8f4709b165562ed487f13c5ecf352597a1a35f54f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bd4ef1a577999ab8134e6571f97a98adb00d3e27625748b7cff7df460683ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rrkfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.799074 4870 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.799098 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.799107 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.799119 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.799129 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:45Z","lastTransitionTime":"2026-01-30T08:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.812716 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36037609-52f9-4c09-8beb-6d35a039347b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cfc68484a864e957ca34c89143ab2d216ee6d03
4da9aa1502a74f8358dc2258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cfc68484a864e957ca34c89143ab2d216ee6d034da9aa1502a74f8358dc2258\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:10:44Z\\\",\\\"message\\\":\\\"ontroller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-cj5db\\\\nI0130 08:10:44.228252 6956 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 08:10:44.228262 6956 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:10:44.228249 6956 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0130 08:10:44.228275 6956 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0130 08:10:44.228279 6956 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq\\\\nI0130 08:10:44.228289 6956 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-j4sd8\\\\nI0130 08:10:44.228296 6956 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-j4sd8\\\\nI0130 08:10:44.228299 6956 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq\\\\nI0130 08:10:44.228311 6956 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-mp9vw\\\\nI0130 08:10:44.2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:10:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk5ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj5db\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.823654 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b66c5f2c-1e0e-4d09-ab12-8cd255f29aa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe7ab8f2efaea80fb8f85a7d33f79d70eb848a65d1fc6dd1cacacc479c1065c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a87e50cd50d8225874491d677cc6961215e485310948984353dda93ab97eced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fglbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5xdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.833245 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mp9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b976744-b72d-4291-a32f-437fc1cfbf03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx29n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:10:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mp9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.845323 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8759add-f10c-406f-a1ee-1a4530b369b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb553cf7aed8c067e432248cd08fdd7db46f939c608b18c4e25737b8d115d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11111845069f3d8452723ee5954d1f68e314872eda1d204abf733f4871a7ec40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb3033803f0aa71bcbc5737ac85911714c8cd9c224b2cb7813a3876e454e081f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902d411d492e06565800c28dd1a711b56345ef091ae08543b17f283632d9176d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.859119 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:09:40Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:09:25.785738 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:09:25.787572 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1870838884/tls.crt::/tmp/serving-cert-1870838884/tls.key\\\\\\\"\\\\nI0130 08:09:40.862942 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:09:40.877393 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:09:40.877434 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:09:40.877467 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:09:40.877474 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:09:40.887749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 08:09:40.887801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:09:40.887814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:09:40.887818 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:09:40.887821 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:09:40.887824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 08:09:40.888404 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 08:09:40.890397 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:09:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:09:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.874216 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6658972f2f477703359c1ea09898f96fb1277b0210e37599d80f60cece3184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3fe7b04f825d7b37b1e1312ba6cf36ead423eb02de739ad0e8e1ec0a4dd715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.886143 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d7a43b33ec817283e60b75e27d05e6050252a3b7a4db41c0b76052400301f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.899214 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.901007 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.901029 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.901038 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.901052 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.901061 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:45Z","lastTransitionTime":"2026-01-30T08:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.910957 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:45 crc kubenswrapper[4870]: I0130 08:10:45.922198 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c80dd1abdfdcca0a0e1a53bacfd6288b58888da634a1d551d6d47fa60b7766ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:10:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.003718 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.003751 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.003759 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.003774 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.003784 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:46Z","lastTransitionTime":"2026-01-30T08:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.083747 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 11:18:58.180615578 +0000 UTC Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.109729 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.109788 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.109800 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.109819 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.109837 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:46Z","lastTransitionTime":"2026-01-30T08:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.214115 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.214175 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.214192 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.214216 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.214232 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:46Z","lastTransitionTime":"2026-01-30T08:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.316997 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.317043 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.317058 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.317079 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.317094 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:46Z","lastTransitionTime":"2026-01-30T08:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.420094 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.420143 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.420159 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.420182 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.420199 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:46Z","lastTransitionTime":"2026-01-30T08:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.522913 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.522965 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.522982 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.523006 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.523023 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:46Z","lastTransitionTime":"2026-01-30T08:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.626269 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.626340 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.626359 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.626382 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.626397 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:46Z","lastTransitionTime":"2026-01-30T08:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.729163 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.729216 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.729236 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.729264 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.729283 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:46Z","lastTransitionTime":"2026-01-30T08:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.832723 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.832776 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.832793 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.832815 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.832833 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:46Z","lastTransitionTime":"2026-01-30T08:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.936119 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.936198 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.936211 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.936228 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:46 crc kubenswrapper[4870]: I0130 08:10:46.936238 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:46Z","lastTransitionTime":"2026-01-30T08:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.039193 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.039270 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.039283 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.039306 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.039319 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:47Z","lastTransitionTime":"2026-01-30T08:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.073946 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.074017 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.074037 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.074088 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:47 crc kubenswrapper[4870]: E0130 08:10:47.074148 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:47 crc kubenswrapper[4870]: E0130 08:10:47.074369 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:47 crc kubenswrapper[4870]: E0130 08:10:47.074520 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:47 crc kubenswrapper[4870]: E0130 08:10:47.074630 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.084231 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 14:25:52.741342434 +0000 UTC Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.142283 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.142323 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.142332 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.142365 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.142378 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:47Z","lastTransitionTime":"2026-01-30T08:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.244759 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.244826 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.244848 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.244895 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.244914 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:47Z","lastTransitionTime":"2026-01-30T08:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.348343 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.348428 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.348441 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.348460 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.348474 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:47Z","lastTransitionTime":"2026-01-30T08:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.452004 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.452051 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.452062 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.452083 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.452099 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:47Z","lastTransitionTime":"2026-01-30T08:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.556133 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.556210 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.556226 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.556253 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.556271 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:47Z","lastTransitionTime":"2026-01-30T08:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.659124 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.659177 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.659191 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.659208 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.659222 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:47Z","lastTransitionTime":"2026-01-30T08:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.762269 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.762325 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.762342 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.762363 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.762377 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:47Z","lastTransitionTime":"2026-01-30T08:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.865012 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.865088 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.865101 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.865125 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.865140 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:47Z","lastTransitionTime":"2026-01-30T08:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.967833 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.967886 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.967894 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.967906 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:47 crc kubenswrapper[4870]: I0130 08:10:47.967916 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:47Z","lastTransitionTime":"2026-01-30T08:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.070344 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.070394 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.070410 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.070433 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.070451 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:48Z","lastTransitionTime":"2026-01-30T08:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.085234 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 02:03:34.626757545 +0000 UTC Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.173588 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.173625 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.173632 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.173646 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.173655 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:48Z","lastTransitionTime":"2026-01-30T08:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.276532 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.276590 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.276613 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.276662 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.276695 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:48Z","lastTransitionTime":"2026-01-30T08:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.380564 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.380634 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.380656 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.380682 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.380703 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:48Z","lastTransitionTime":"2026-01-30T08:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.483584 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.483655 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.483676 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.483700 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.483713 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:48Z","lastTransitionTime":"2026-01-30T08:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.586671 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.586708 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.586718 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.586735 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.586747 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:48Z","lastTransitionTime":"2026-01-30T08:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.689455 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.689499 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.689511 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.689527 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.689539 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:48Z","lastTransitionTime":"2026-01-30T08:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.792659 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.792700 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.792711 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.792726 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.792737 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:48Z","lastTransitionTime":"2026-01-30T08:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.895048 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.895116 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.895140 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.895162 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:48 crc kubenswrapper[4870]: I0130 08:10:48.895178 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:48Z","lastTransitionTime":"2026-01-30T08:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:48.998507 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:48.998559 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:48.998634 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:48.998651 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:48.998667 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:48Z","lastTransitionTime":"2026-01-30T08:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.073541 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.073588 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.073622 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:49 crc kubenswrapper[4870]: E0130 08:10:49.073752 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.073791 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:49 crc kubenswrapper[4870]: E0130 08:10:49.073917 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:49 crc kubenswrapper[4870]: E0130 08:10:49.074070 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:49 crc kubenswrapper[4870]: E0130 08:10:49.074208 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.085955 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 20:07:08.930196246 +0000 UTC Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.101222 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.101575 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.101768 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.102016 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.102234 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:49Z","lastTransitionTime":"2026-01-30T08:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.205604 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.205652 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.205663 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.205686 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.205701 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:49Z","lastTransitionTime":"2026-01-30T08:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.229209 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.229346 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.229461 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.229655 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.229750 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:10:49Z","lastTransitionTime":"2026-01-30T08:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.296584 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-np8ts"] Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.297283 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-np8ts" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.300041 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.303711 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.304116 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.304700 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.370202 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rrkfz" podStartSLOduration=62.370160133 podStartE2EDuration="1m2.370160133s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:10:49.338845741 +0000 UTC m=+88.034392850" watchObservedRunningTime="2026-01-30 08:10:49.370160133 +0000 UTC m=+88.065707242" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.384018 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5xdq" podStartSLOduration=62.383990986 podStartE2EDuration="1m2.383990986s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:10:49.383338746 +0000 UTC m=+88.078885865" watchObservedRunningTime="2026-01-30 08:10:49.383990986 +0000 UTC m=+88.079538095" Jan 
30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.429151 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=36.429134622 podStartE2EDuration="36.429134622s" podCreationTimestamp="2026-01-30 08:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:10:49.429106391 +0000 UTC m=+88.124653500" watchObservedRunningTime="2026-01-30 08:10:49.429134622 +0000 UTC m=+88.124681731" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.447033 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=68.447004153 podStartE2EDuration="1m8.447004153s" podCreationTimestamp="2026-01-30 08:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:10:49.4465923 +0000 UTC m=+88.142139429" watchObservedRunningTime="2026-01-30 08:10:49.447004153 +0000 UTC m=+88.142551262" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.455397 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ee0907b6-d3b0-44d3-b153-a06bd6922390-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-np8ts\" (UID: \"ee0907b6-d3b0-44d3-b153-a06bd6922390\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-np8ts" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.455462 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ee0907b6-d3b0-44d3-b153-a06bd6922390-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-np8ts\" (UID: \"ee0907b6-d3b0-44d3-b153-a06bd6922390\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-np8ts" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.455503 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee0907b6-d3b0-44d3-b153-a06bd6922390-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-np8ts\" (UID: \"ee0907b6-d3b0-44d3-b153-a06bd6922390\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-np8ts" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.455534 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee0907b6-d3b0-44d3-b153-a06bd6922390-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-np8ts\" (UID: \"ee0907b6-d3b0-44d3-b153-a06bd6922390\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-np8ts" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.455553 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee0907b6-d3b0-44d3-b153-a06bd6922390-service-ca\") pod \"cluster-version-operator-5c965bbfc6-np8ts\" (UID: \"ee0907b6-d3b0-44d3-b153-a06bd6922390\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-np8ts" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.530717 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-dns/node-resolver-8kvt7" podStartSLOduration=62.530682737 podStartE2EDuration="1m2.530682737s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:10:49.529457168 +0000 UTC m=+88.225004287" watchObservedRunningTime="2026-01-30 08:10:49.530682737 +0000 UTC m=+88.226229846" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.545734 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-hsmrb" podStartSLOduration=62.545712348 podStartE2EDuration="1m2.545712348s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:10:49.545610805 +0000 UTC m=+88.241157924" watchObservedRunningTime="2026-01-30 08:10:49.545712348 +0000 UTC m=+88.241259457" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.556792 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ee0907b6-d3b0-44d3-b153-a06bd6922390-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-np8ts\" (UID: \"ee0907b6-d3b0-44d3-b153-a06bd6922390\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-np8ts" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.557095 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ee0907b6-d3b0-44d3-b153-a06bd6922390-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-np8ts\" (UID: \"ee0907b6-d3b0-44d3-b153-a06bd6922390\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-np8ts" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.557174 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ee0907b6-d3b0-44d3-b153-a06bd6922390-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-np8ts\" (UID: \"ee0907b6-d3b0-44d3-b153-a06bd6922390\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-np8ts" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.557014 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ee0907b6-d3b0-44d3-b153-a06bd6922390-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-np8ts\" (UID: \"ee0907b6-d3b0-44d3-b153-a06bd6922390\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-np8ts" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.557193 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee0907b6-d3b0-44d3-b153-a06bd6922390-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-np8ts\" (UID: \"ee0907b6-d3b0-44d3-b153-a06bd6922390\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-np8ts" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.557434 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee0907b6-d3b0-44d3-b153-a06bd6922390-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-np8ts\" (UID: \"ee0907b6-d3b0-44d3-b153-a06bd6922390\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-np8ts" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.557490 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee0907b6-d3b0-44d3-b153-a06bd6922390-service-ca\") pod \"cluster-version-operator-5c965bbfc6-np8ts\" (UID: \"ee0907b6-d3b0-44d3-b153-a06bd6922390\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-np8ts" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.558436 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee0907b6-d3b0-44d3-b153-a06bd6922390-service-ca\") pod \"cluster-version-operator-5c965bbfc6-np8ts\" (UID: \"ee0907b6-d3b0-44d3-b153-a06bd6922390\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-np8ts" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.560923 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podStartSLOduration=62.560908135 podStartE2EDuration="1m2.560908135s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:10:49.560422509 +0000 UTC m=+88.255969648" watchObservedRunningTime="2026-01-30 08:10:49.560908135 +0000 UTC m=+88.256455244" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.572605 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee0907b6-d3b0-44d3-b153-a06bd6922390-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-np8ts\" (UID: \"ee0907b6-d3b0-44d3-b153-a06bd6922390\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-np8ts" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.577178 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee0907b6-d3b0-44d3-b153-a06bd6922390-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-np8ts\" (UID: \"ee0907b6-d3b0-44d3-b153-a06bd6922390\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-np8ts" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.609794 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=68.609769586 podStartE2EDuration="1m8.609769586s" podCreationTimestamp="2026-01-30 08:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:10:49.608246869 +0000 UTC m=+88.303793978" watchObservedRunningTime="2026-01-30 08:10:49.609769586 +0000 UTC m=+88.305316685" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.612257 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-dpj7j" podStartSLOduration=62.612224234 podStartE2EDuration="1m2.612224234s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:10:49.582626006 +0000 UTC m=+88.278173135" watchObservedRunningTime="2026-01-30 08:10:49.612224234 +0000 UTC m=+88.307771353" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.615095 4870 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-np8ts" Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.629642 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=67.629570538 podStartE2EDuration="1m7.629570538s" podCreationTimestamp="2026-01-30 08:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:10:49.627055179 +0000 UTC m=+88.322602288" watchObservedRunningTime="2026-01-30 08:10:49.629570538 +0000 UTC m=+88.325117647" Jan 30 08:10:49 crc kubenswrapper[4870]: W0130 08:10:49.632514 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee0907b6_d3b0_44d3_b153_a06bd6922390.slice/crio-4ec29929bd8f01f0b4fa59f1ffb310c74c4006f44dbf7e13c2704860e0a334b6 WatchSource:0}: Error finding container 4ec29929bd8f01f0b4fa59f1ffb310c74c4006f44dbf7e13c2704860e0a334b6: Status 404 returned error can't find the container with id 4ec29929bd8f01f0b4fa59f1ffb310c74c4006f44dbf7e13c2704860e0a334b6 Jan 30 08:10:49 crc kubenswrapper[4870]: I0130 08:10:49.662499 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-np8ts" event={"ID":"ee0907b6-d3b0-44d3-b153-a06bd6922390","Type":"ContainerStarted","Data":"4ec29929bd8f01f0b4fa59f1ffb310c74c4006f44dbf7e13c2704860e0a334b6"} Jan 30 08:10:50 crc kubenswrapper[4870]: I0130 08:10:50.087232 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 22:20:20.048701557 +0000 UTC Jan 30 08:10:50 crc kubenswrapper[4870]: I0130 08:10:50.087696 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 30 08:10:50 crc kubenswrapper[4870]: I0130 08:10:50.087513 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 30 08:10:50 crc kubenswrapper[4870]: I0130 08:10:50.094479 4870 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 08:10:50 crc kubenswrapper[4870]: I0130 08:10:50.666657 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-np8ts" event={"ID":"ee0907b6-d3b0-44d3-b153-a06bd6922390","Type":"ContainerStarted","Data":"cd73d9c53fc6d9653ff691866e80fec8f89ff677dd18e02d2a07d7c56a5901bc"} Jan 30 08:10:50 crc kubenswrapper[4870]: I0130 08:10:50.700666 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=0.700640347 podStartE2EDuration="700.640347ms" podCreationTimestamp="2026-01-30 08:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:10:50.68192708 +0000 UTC m=+89.377474229" watchObservedRunningTime="2026-01-30 08:10:50.700640347 +0000 UTC m=+89.396187496" Jan 30 08:10:50 crc kubenswrapper[4870]: I0130 08:10:50.701625 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-np8ts" podStartSLOduration=63.701614147 
podStartE2EDuration="1m3.701614147s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:10:50.700335427 +0000 UTC m=+89.395882596" watchObservedRunningTime="2026-01-30 08:10:50.701614147 +0000 UTC m=+89.397161286" Jan 30 08:10:51 crc kubenswrapper[4870]: I0130 08:10:51.074326 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:51 crc kubenswrapper[4870]: I0130 08:10:51.074431 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:51 crc kubenswrapper[4870]: E0130 08:10:51.074477 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:51 crc kubenswrapper[4870]: I0130 08:10:51.074436 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:51 crc kubenswrapper[4870]: E0130 08:10:51.074563 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:51 crc kubenswrapper[4870]: I0130 08:10:51.074596 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:51 crc kubenswrapper[4870]: E0130 08:10:51.074696 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:51 crc kubenswrapper[4870]: E0130 08:10:51.074781 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:53 crc kubenswrapper[4870]: I0130 08:10:53.074034 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:53 crc kubenswrapper[4870]: I0130 08:10:53.074060 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:53 crc kubenswrapper[4870]: I0130 08:10:53.074036 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:53 crc kubenswrapper[4870]: I0130 08:10:53.074108 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:53 crc kubenswrapper[4870]: E0130 08:10:53.074248 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:53 crc kubenswrapper[4870]: E0130 08:10:53.074312 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:53 crc kubenswrapper[4870]: E0130 08:10:53.074379 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:53 crc kubenswrapper[4870]: E0130 08:10:53.074438 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:55 crc kubenswrapper[4870]: I0130 08:10:55.074001 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:55 crc kubenswrapper[4870]: I0130 08:10:55.074053 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:55 crc kubenswrapper[4870]: I0130 08:10:55.074166 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:55 crc kubenswrapper[4870]: E0130 08:10:55.074183 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:55 crc kubenswrapper[4870]: I0130 08:10:55.074213 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:55 crc kubenswrapper[4870]: E0130 08:10:55.074311 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:55 crc kubenswrapper[4870]: E0130 08:10:55.074401 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:55 crc kubenswrapper[4870]: E0130 08:10:55.074460 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:56 crc kubenswrapper[4870]: I0130 08:10:56.075596 4870 scope.go:117] "RemoveContainer" containerID="1cfc68484a864e957ca34c89143ab2d216ee6d034da9aa1502a74f8358dc2258" Jan 30 08:10:56 crc kubenswrapper[4870]: E0130 08:10:56.076139 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" podUID="36037609-52f9-4c09-8beb-6d35a039347b" Jan 30 08:10:57 crc kubenswrapper[4870]: I0130 08:10:57.074489 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:57 crc kubenswrapper[4870]: E0130 08:10:57.074636 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:10:57 crc kubenswrapper[4870]: I0130 08:10:57.074672 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:57 crc kubenswrapper[4870]: E0130 08:10:57.074780 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:57 crc kubenswrapper[4870]: I0130 08:10:57.074924 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:57 crc kubenswrapper[4870]: I0130 08:10:57.074938 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:57 crc kubenswrapper[4870]: E0130 08:10:57.074967 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:57 crc kubenswrapper[4870]: E0130 08:10:57.075142 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:59 crc kubenswrapper[4870]: I0130 08:10:59.073517 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:10:59 crc kubenswrapper[4870]: E0130 08:10:59.073933 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:10:59 crc kubenswrapper[4870]: I0130 08:10:59.073596 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:10:59 crc kubenswrapper[4870]: I0130 08:10:59.073596 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:10:59 crc kubenswrapper[4870]: E0130 08:10:59.074005 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:10:59 crc kubenswrapper[4870]: I0130 08:10:59.073630 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:10:59 crc kubenswrapper[4870]: E0130 08:10:59.074160 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:10:59 crc kubenswrapper[4870]: E0130 08:10:59.074270 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:11:01 crc kubenswrapper[4870]: I0130 08:11:01.073986 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:01 crc kubenswrapper[4870]: I0130 08:11:01.074021 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:01 crc kubenswrapper[4870]: I0130 08:11:01.074098 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:01 crc kubenswrapper[4870]: E0130 08:11:01.074184 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:11:01 crc kubenswrapper[4870]: I0130 08:11:01.074224 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:01 crc kubenswrapper[4870]: E0130 08:11:01.074298 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:11:01 crc kubenswrapper[4870]: E0130 08:11:01.074397 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:11:01 crc kubenswrapper[4870]: E0130 08:11:01.074447 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:11:03 crc kubenswrapper[4870]: I0130 08:11:03.074463 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:03 crc kubenswrapper[4870]: E0130 08:11:03.075319 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:11:03 crc kubenswrapper[4870]: I0130 08:11:03.074547 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:03 crc kubenswrapper[4870]: I0130 08:11:03.074602 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:03 crc kubenswrapper[4870]: E0130 08:11:03.075718 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:11:03 crc kubenswrapper[4870]: I0130 08:11:03.074514 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:03 crc kubenswrapper[4870]: E0130 08:11:03.075862 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:11:03 crc kubenswrapper[4870]: E0130 08:11:03.076353 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:11:05 crc kubenswrapper[4870]: I0130 08:11:05.074732 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:05 crc kubenswrapper[4870]: I0130 08:11:05.074739 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:05 crc kubenswrapper[4870]: E0130 08:11:05.074974 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:11:05 crc kubenswrapper[4870]: I0130 08:11:05.074772 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:05 crc kubenswrapper[4870]: I0130 08:11:05.074726 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:05 crc kubenswrapper[4870]: E0130 08:11:05.075200 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:11:05 crc kubenswrapper[4870]: E0130 08:11:05.075233 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:11:05 crc kubenswrapper[4870]: E0130 08:11:05.075295 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:11:05 crc kubenswrapper[4870]: I0130 08:11:05.548985 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs\") pod \"network-metrics-daemon-mp9vw\" (UID: \"7b976744-b72d-4291-a32f-437fc1cfbf03\") " pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:05 crc kubenswrapper[4870]: E0130 08:11:05.549188 4870 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:11:05 crc kubenswrapper[4870]: E0130 08:11:05.549289 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs podName:7b976744-b72d-4291-a32f-437fc1cfbf03 nodeName:}" failed. 
No retries permitted until 2026-01-30 08:12:09.549265883 +0000 UTC m=+168.244813172 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs") pod "network-metrics-daemon-mp9vw" (UID: "7b976744-b72d-4291-a32f-437fc1cfbf03") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:11:07 crc kubenswrapper[4870]: I0130 08:11:07.074513 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:07 crc kubenswrapper[4870]: I0130 08:11:07.074558 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:07 crc kubenswrapper[4870]: I0130 08:11:07.074589 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:07 crc kubenswrapper[4870]: E0130 08:11:07.074661 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:11:07 crc kubenswrapper[4870]: I0130 08:11:07.074690 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:07 crc kubenswrapper[4870]: E0130 08:11:07.074775 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:11:07 crc kubenswrapper[4870]: E0130 08:11:07.075198 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:11:07 crc kubenswrapper[4870]: E0130 08:11:07.075175 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:11:08 crc kubenswrapper[4870]: I0130 08:11:08.075646 4870 scope.go:117] "RemoveContainer" containerID="1cfc68484a864e957ca34c89143ab2d216ee6d034da9aa1502a74f8358dc2258" Jan 30 08:11:08 crc kubenswrapper[4870]: E0130 08:11:08.076000 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" podUID="36037609-52f9-4c09-8beb-6d35a039347b" Jan 30 08:11:09 crc kubenswrapper[4870]: I0130 08:11:09.074273 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:09 crc kubenswrapper[4870]: I0130 08:11:09.074355 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:09 crc kubenswrapper[4870]: I0130 08:11:09.074379 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:09 crc kubenswrapper[4870]: E0130 08:11:09.074467 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:11:09 crc kubenswrapper[4870]: I0130 08:11:09.074508 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:09 crc kubenswrapper[4870]: E0130 08:11:09.074583 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:11:09 crc kubenswrapper[4870]: E0130 08:11:09.074687 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:11:09 crc kubenswrapper[4870]: E0130 08:11:09.074807 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:11:11 crc kubenswrapper[4870]: I0130 08:11:11.073564 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:11 crc kubenswrapper[4870]: I0130 08:11:11.073602 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:11 crc kubenswrapper[4870]: I0130 08:11:11.073613 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:11 crc kubenswrapper[4870]: I0130 08:11:11.073753 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:11 crc kubenswrapper[4870]: E0130 08:11:11.073907 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:11:11 crc kubenswrapper[4870]: E0130 08:11:11.074138 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:11:11 crc kubenswrapper[4870]: E0130 08:11:11.074279 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:11:11 crc kubenswrapper[4870]: E0130 08:11:11.074362 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:11:13 crc kubenswrapper[4870]: I0130 08:11:13.073696 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:13 crc kubenswrapper[4870]: I0130 08:11:13.073701 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:13 crc kubenswrapper[4870]: I0130 08:11:13.073741 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:13 crc kubenswrapper[4870]: I0130 08:11:13.073707 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:13 crc kubenswrapper[4870]: E0130 08:11:13.074065 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:11:13 crc kubenswrapper[4870]: E0130 08:11:13.074189 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:11:13 crc kubenswrapper[4870]: E0130 08:11:13.074359 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:11:13 crc kubenswrapper[4870]: E0130 08:11:13.074555 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:11:15 crc kubenswrapper[4870]: I0130 08:11:15.074666 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:15 crc kubenswrapper[4870]: I0130 08:11:15.074664 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:15 crc kubenswrapper[4870]: I0130 08:11:15.074716 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:15 crc kubenswrapper[4870]: E0130 08:11:15.074921 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:11:15 crc kubenswrapper[4870]: I0130 08:11:15.074993 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:15 crc kubenswrapper[4870]: E0130 08:11:15.075063 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:11:15 crc kubenswrapper[4870]: E0130 08:11:15.075160 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:11:15 crc kubenswrapper[4870]: E0130 08:11:15.075499 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:11:17 crc kubenswrapper[4870]: I0130 08:11:17.073808 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:17 crc kubenswrapper[4870]: I0130 08:11:17.073848 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:17 crc kubenswrapper[4870]: I0130 08:11:17.073955 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:17 crc kubenswrapper[4870]: E0130 08:11:17.075092 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:11:17 crc kubenswrapper[4870]: E0130 08:11:17.075221 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:11:17 crc kubenswrapper[4870]: I0130 08:11:17.073988 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:17 crc kubenswrapper[4870]: E0130 08:11:17.075444 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:11:17 crc kubenswrapper[4870]: E0130 08:11:17.075795 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:11:19 crc kubenswrapper[4870]: I0130 08:11:19.074037 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:19 crc kubenswrapper[4870]: I0130 08:11:19.074176 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:19 crc kubenswrapper[4870]: I0130 08:11:19.074208 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:19 crc kubenswrapper[4870]: E0130 08:11:19.074362 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:11:19 crc kubenswrapper[4870]: E0130 08:11:19.074566 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:11:19 crc kubenswrapper[4870]: E0130 08:11:19.074621 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:11:19 crc kubenswrapper[4870]: I0130 08:11:19.075015 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:19 crc kubenswrapper[4870]: E0130 08:11:19.075243 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:11:21 crc kubenswrapper[4870]: I0130 08:11:21.074763 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:21 crc kubenswrapper[4870]: I0130 08:11:21.074805 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:21 crc kubenswrapper[4870]: I0130 08:11:21.074839 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:21 crc kubenswrapper[4870]: E0130 08:11:21.074962 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:11:21 crc kubenswrapper[4870]: I0130 08:11:21.075038 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:21 crc kubenswrapper[4870]: E0130 08:11:21.075125 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:11:21 crc kubenswrapper[4870]: E0130 08:11:21.075237 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:11:21 crc kubenswrapper[4870]: E0130 08:11:21.075374 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:11:21 crc kubenswrapper[4870]: I0130 08:11:21.077650 4870 scope.go:117] "RemoveContainer" containerID="1cfc68484a864e957ca34c89143ab2d216ee6d034da9aa1502a74f8358dc2258" Jan 30 08:11:21 crc kubenswrapper[4870]: E0130 08:11:21.078109 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-cj5db_openshift-ovn-kubernetes(36037609-52f9-4c09-8beb-6d35a039347b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" podUID="36037609-52f9-4c09-8beb-6d35a039347b" Jan 30 08:11:21 crc kubenswrapper[4870]: I0130 08:11:21.870657 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hsmrb_3e8e9e25-2b9b-4820-8282-48e1d930a721/kube-multus/1.log" Jan 30 08:11:21 crc kubenswrapper[4870]: I0130 08:11:21.871666 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hsmrb_3e8e9e25-2b9b-4820-8282-48e1d930a721/kube-multus/0.log" Jan 30 08:11:21 crc kubenswrapper[4870]: I0130 08:11:21.871738 4870 generic.go:334] "Generic (PLEG): container finished" podID="3e8e9e25-2b9b-4820-8282-48e1d930a721" containerID="e3a6e35394d201b4791302c67100d562af8287400f9c84b312e22704a65348d6" exitCode=1 Jan 30 08:11:21 crc kubenswrapper[4870]: I0130 08:11:21.871787 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hsmrb" event={"ID":"3e8e9e25-2b9b-4820-8282-48e1d930a721","Type":"ContainerDied","Data":"e3a6e35394d201b4791302c67100d562af8287400f9c84b312e22704a65348d6"} Jan 30 08:11:21 crc kubenswrapper[4870]: I0130 08:11:21.871838 4870 scope.go:117] "RemoveContainer" containerID="f51ff6706b07d9e1323a28f982e208fa5e28cf7193da7addb0bb82616f72aa1a" Jan 30 08:11:21 crc kubenswrapper[4870]: I0130 08:11:21.872672 4870 scope.go:117] "RemoveContainer" containerID="e3a6e35394d201b4791302c67100d562af8287400f9c84b312e22704a65348d6" Jan 30 08:11:21 crc kubenswrapper[4870]: E0130 08:11:21.873041 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-hsmrb_openshift-multus(3e8e9e25-2b9b-4820-8282-48e1d930a721)\"" pod="openshift-multus/multus-hsmrb" podUID="3e8e9e25-2b9b-4820-8282-48e1d930a721" Jan 30 08:11:22 crc kubenswrapper[4870]: E0130 08:11:22.044155 4870 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 30 08:11:22 crc kubenswrapper[4870]: E0130 08:11:22.181647 4870 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 08:11:22 crc kubenswrapper[4870]: I0130 08:11:22.879595 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hsmrb_3e8e9e25-2b9b-4820-8282-48e1d930a721/kube-multus/1.log" Jan 30 08:11:23 crc kubenswrapper[4870]: I0130 08:11:23.074619 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:23 crc kubenswrapper[4870]: I0130 08:11:23.074746 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:23 crc kubenswrapper[4870]: I0130 08:11:23.074918 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:23 crc kubenswrapper[4870]: I0130 08:11:23.075151 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:23 crc kubenswrapper[4870]: E0130 08:11:23.075125 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:11:23 crc kubenswrapper[4870]: E0130 08:11:23.075398 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:11:23 crc kubenswrapper[4870]: E0130 08:11:23.075451 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:11:23 crc kubenswrapper[4870]: E0130 08:11:23.075552 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:11:25 crc kubenswrapper[4870]: I0130 08:11:25.073969 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:25 crc kubenswrapper[4870]: I0130 08:11:25.074043 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:25 crc kubenswrapper[4870]: I0130 08:11:25.074058 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:25 crc kubenswrapper[4870]: I0130 08:11:25.074059 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:25 crc kubenswrapper[4870]: E0130 08:11:25.074241 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:11:25 crc kubenswrapper[4870]: E0130 08:11:25.074373 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:11:25 crc kubenswrapper[4870]: E0130 08:11:25.074481 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:11:25 crc kubenswrapper[4870]: E0130 08:11:25.074598 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:11:27 crc kubenswrapper[4870]: I0130 08:11:27.074136 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:27 crc kubenswrapper[4870]: I0130 08:11:27.074160 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:27 crc kubenswrapper[4870]: E0130 08:11:27.074359 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:11:27 crc kubenswrapper[4870]: I0130 08:11:27.074163 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:27 crc kubenswrapper[4870]: E0130 08:11:27.074562 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:11:27 crc kubenswrapper[4870]: E0130 08:11:27.074750 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:11:27 crc kubenswrapper[4870]: I0130 08:11:27.074954 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:27 crc kubenswrapper[4870]: E0130 08:11:27.075062 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:11:27 crc kubenswrapper[4870]: E0130 08:11:27.183840 4870 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 08:11:29 crc kubenswrapper[4870]: I0130 08:11:29.073597 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:29 crc kubenswrapper[4870]: I0130 08:11:29.073624 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:29 crc kubenswrapper[4870]: E0130 08:11:29.074298 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:11:29 crc kubenswrapper[4870]: I0130 08:11:29.073767 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:29 crc kubenswrapper[4870]: I0130 08:11:29.073732 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:29 crc kubenswrapper[4870]: E0130 08:11:29.074375 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:11:29 crc kubenswrapper[4870]: E0130 08:11:29.074487 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:11:29 crc kubenswrapper[4870]: E0130 08:11:29.074663 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:11:31 crc kubenswrapper[4870]: I0130 08:11:31.074058 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:31 crc kubenswrapper[4870]: I0130 08:11:31.074092 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:31 crc kubenswrapper[4870]: I0130 08:11:31.074071 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:31 crc kubenswrapper[4870]: E0130 08:11:31.074217 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:11:31 crc kubenswrapper[4870]: I0130 08:11:31.074364 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:31 crc kubenswrapper[4870]: E0130 08:11:31.074431 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:11:31 crc kubenswrapper[4870]: E0130 08:11:31.074475 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:11:31 crc kubenswrapper[4870]: E0130 08:11:31.074538 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:11:32 crc kubenswrapper[4870]: E0130 08:11:32.184489 4870 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 08:11:33 crc kubenswrapper[4870]: I0130 08:11:33.073692 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:33 crc kubenswrapper[4870]: I0130 08:11:33.073757 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:33 crc kubenswrapper[4870]: I0130 08:11:33.073692 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:33 crc kubenswrapper[4870]: I0130 08:11:33.073864 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:33 crc kubenswrapper[4870]: E0130 08:11:33.073984 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:11:33 crc kubenswrapper[4870]: E0130 08:11:33.074144 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:11:33 crc kubenswrapper[4870]: E0130 08:11:33.074250 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:11:33 crc kubenswrapper[4870]: E0130 08:11:33.074307 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:11:34 crc kubenswrapper[4870]: I0130 08:11:34.075215 4870 scope.go:117] "RemoveContainer" containerID="1cfc68484a864e957ca34c89143ab2d216ee6d034da9aa1502a74f8358dc2258" Jan 30 08:11:34 crc kubenswrapper[4870]: I0130 08:11:34.926601 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj5db_36037609-52f9-4c09-8beb-6d35a039347b/ovnkube-controller/3.log" Jan 30 08:11:34 crc kubenswrapper[4870]: I0130 08:11:34.930229 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerStarted","Data":"b741960d899fead07c73e8ea4b750a10bd019b223fe9d09e7a67a573f3e4bee3"} Jan 30 08:11:34 crc kubenswrapper[4870]: I0130 08:11:34.930709 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:11:34 crc kubenswrapper[4870]: I0130 08:11:34.959329 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" podStartSLOduration=107.95930847 podStartE2EDuration="1m47.95930847s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:34.95765572 +0000 UTC m=+133.653202849" watchObservedRunningTime="2026-01-30 08:11:34.95930847 +0000 UTC m=+133.654855589" Jan 30 08:11:34 crc kubenswrapper[4870]: I0130 08:11:34.993442 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mp9vw"] Jan 30 08:11:34 crc kubenswrapper[4870]: I0130 08:11:34.993653 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:34 crc kubenswrapper[4870]: E0130 08:11:34.993773 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:11:35 crc kubenswrapper[4870]: I0130 08:11:35.074301 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:35 crc kubenswrapper[4870]: I0130 08:11:35.074382 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:35 crc kubenswrapper[4870]: E0130 08:11:35.074427 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:11:35 crc kubenswrapper[4870]: I0130 08:11:35.074617 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:35 crc kubenswrapper[4870]: E0130 08:11:35.074603 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:11:35 crc kubenswrapper[4870]: E0130 08:11:35.074671 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:11:36 crc kubenswrapper[4870]: I0130 08:11:36.074708 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:36 crc kubenswrapper[4870]: E0130 08:11:36.074924 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:11:37 crc kubenswrapper[4870]: I0130 08:11:37.074500 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:37 crc kubenswrapper[4870]: I0130 08:11:37.074591 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:37 crc kubenswrapper[4870]: I0130 08:11:37.074618 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:37 crc kubenswrapper[4870]: E0130 08:11:37.074686 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:11:37 crc kubenswrapper[4870]: E0130 08:11:37.074896 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:11:37 crc kubenswrapper[4870]: E0130 08:11:37.075060 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:11:37 crc kubenswrapper[4870]: I0130 08:11:37.075550 4870 scope.go:117] "RemoveContainer" containerID="e3a6e35394d201b4791302c67100d562af8287400f9c84b312e22704a65348d6" Jan 30 08:11:37 crc kubenswrapper[4870]: E0130 08:11:37.185653 4870 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 08:11:37 crc kubenswrapper[4870]: I0130 08:11:37.945382 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hsmrb_3e8e9e25-2b9b-4820-8282-48e1d930a721/kube-multus/1.log" Jan 30 08:11:37 crc kubenswrapper[4870]: I0130 08:11:37.945454 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hsmrb" event={"ID":"3e8e9e25-2b9b-4820-8282-48e1d930a721","Type":"ContainerStarted","Data":"61538cdbec39ead4232db7d69f7b41605b3dfdb222395b4c93251e6ded8b3e41"} Jan 30 08:11:38 crc kubenswrapper[4870]: I0130 08:11:38.074064 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:38 crc kubenswrapper[4870]: E0130 08:11:38.074315 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:11:39 crc kubenswrapper[4870]: I0130 08:11:39.074091 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:39 crc kubenswrapper[4870]: I0130 08:11:39.074161 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:39 crc kubenswrapper[4870]: E0130 08:11:39.074306 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:11:39 crc kubenswrapper[4870]: I0130 08:11:39.074606 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:39 crc kubenswrapper[4870]: E0130 08:11:39.074708 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:11:39 crc kubenswrapper[4870]: E0130 08:11:39.075077 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:11:40 crc kubenswrapper[4870]: I0130 08:11:40.074709 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:40 crc kubenswrapper[4870]: E0130 08:11:40.074980 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:11:41 crc kubenswrapper[4870]: I0130 08:11:41.073859 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:41 crc kubenswrapper[4870]: I0130 08:11:41.073898 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:41 crc kubenswrapper[4870]: E0130 08:11:41.074409 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:11:41 crc kubenswrapper[4870]: I0130 08:11:41.073932 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:41 crc kubenswrapper[4870]: E0130 08:11:41.074560 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:11:41 crc kubenswrapper[4870]: E0130 08:11:41.074759 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:11:42 crc kubenswrapper[4870]: I0130 08:11:42.075241 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:42 crc kubenswrapper[4870]: E0130 08:11:42.076011 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mp9vw" podUID="7b976744-b72d-4291-a32f-437fc1cfbf03" Jan 30 08:11:43 crc kubenswrapper[4870]: I0130 08:11:43.074027 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:43 crc kubenswrapper[4870]: I0130 08:11:43.074113 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:43 crc kubenswrapper[4870]: I0130 08:11:43.074039 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:43 crc kubenswrapper[4870]: I0130 08:11:43.076845 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 08:11:43 crc kubenswrapper[4870]: I0130 08:11:43.077002 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 08:11:43 crc kubenswrapper[4870]: I0130 08:11:43.077187 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 08:11:43 crc kubenswrapper[4870]: I0130 08:11:43.077645 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 08:11:44 crc kubenswrapper[4870]: I0130 08:11:44.074118 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:11:44 crc kubenswrapper[4870]: I0130 08:11:44.077783 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 08:11:44 crc kubenswrapper[4870]: I0130 08:11:44.077865 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 08:11:48 crc kubenswrapper[4870]: I0130 08:11:48.939506 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:48 crc kubenswrapper[4870]: E0130 08:11:48.939725 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:13:50.939683142 +0000 UTC m=+269.635230261 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:49 crc kubenswrapper[4870]: I0130 08:11:49.041349 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:49 crc kubenswrapper[4870]: I0130 08:11:49.041405 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:49 crc kubenswrapper[4870]: I0130 08:11:49.041431 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:49 crc kubenswrapper[4870]: I0130 08:11:49.041453 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:49 crc kubenswrapper[4870]: I0130 08:11:49.042642 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:49 crc kubenswrapper[4870]: I0130 08:11:49.048504 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:49 crc kubenswrapper[4870]: I0130 08:11:49.049300 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:49 crc kubenswrapper[4870]: I0130 08:11:49.050046 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:49 crc kubenswrapper[4870]: I0130 08:11:49.095648 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:11:49 crc kubenswrapper[4870]: I0130 08:11:49.110588 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:49 crc kubenswrapper[4870]: I0130 08:11:49.117530 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:11:49 crc kubenswrapper[4870]: W0130 08:11:49.341522 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-82e1818833b47c8b1e15c8e8b7294b1b88861e6ac1525cbba927937ed04a77fe WatchSource:0}: Error finding container 82e1818833b47c8b1e15c8e8b7294b1b88861e6ac1525cbba927937ed04a77fe: Status 404 returned error can't find the container with id 82e1818833b47c8b1e15c8e8b7294b1b88861e6ac1525cbba927937ed04a77fe Jan 30 08:11:49 crc kubenswrapper[4870]: W0130 08:11:49.349105 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-bb4b150b8a7ebc4a00e0d09cb44c52a1363567f24e299f34f5b786cfeb3e4a05 WatchSource:0}: Error finding container bb4b150b8a7ebc4a00e0d09cb44c52a1363567f24e299f34f5b786cfeb3e4a05: Status 404 returned error can't find the container with id bb4b150b8a7ebc4a00e0d09cb44c52a1363567f24e299f34f5b786cfeb3e4a05 Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.002564 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"cc95b5edd8b272888f6271d7358aded96654ea07635ccee9a35987508d35b43b"} Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.005539 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8e66ca15d727c788aba5896683ae5c42367011461eb3c263e4bb4623e8a053a2"} Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.005597 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"7983ec701a449c7dd8cbee8a83e699b7c4adefed08d7718b1361dcd2ef740a69"} Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.005635 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"82e1818833b47c8b1e15c8e8b7294b1b88861e6ac1525cbba927937ed04a77fe"} Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.008364 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"2bf75989a5a217dae0b7f0756ecc1b7808a21afc7e0a76e1d81a29445f78c450"} Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.008482 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"bb4b150b8a7ebc4a00e0d09cb44c52a1363567f24e299f34f5b786cfeb3e4a05"} Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.008785 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.755505 4870 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeReady" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.825260 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-2g2tj"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.827113 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.829413 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.835557 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.835767 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.835992 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.835563 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.836464 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.838851 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kkf4z"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.839228 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.841354 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.841618 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.842051 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.842230 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.842278 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.842509 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.842691 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.842248 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.842929 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.843051 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xxrkx"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.843948 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.847298 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-v7bvt"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.847951 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.848036 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.848136 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-v7bvt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.848260 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.850110 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8nfd4"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.850948 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nfd4" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.851445 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vjvf5"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.852207 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vjvf5" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.856248 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.856539 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.856685 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.856758 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.861317 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lcwc"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.862082 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lcwc" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.862183 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-cdzxd"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.862994 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-cdzxd" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.863855 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jr94b"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.864815 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jr94b" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.864944 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7t7rr"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.865506 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7t7rr" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.868945 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nw244"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.869672 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nw244" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.870253 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.870297 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.870484 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ssxgx"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.871022 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.871114 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.874116 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-2mj87"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.874788 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.879222 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.879464 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.879558 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sfs65"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.879776 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.880425 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.883934 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-s6768"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.884744 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kvwdd"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.885312 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kvwdd" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.885547 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mtt79"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.885635 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-s6768" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.886020 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.886247 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.886335 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.886020 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.886433 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.886335 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mtt79" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.886257 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.886739 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.886796 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.886815 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.889455 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.889838 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67xxt"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.890244 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.890413 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.890510 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67xxt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.890661 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.890951 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.891423 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.891527 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.891950 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.892399 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.906572 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.911250 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tsghw"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.914157 4870 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.915021 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.915442 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-vn7wx"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.920771 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tsghw" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.923384 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.924500 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.929052 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.929206 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.929363 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.929367 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.960866 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.961135 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.961214 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.962726 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.963028 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.963110 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.963276 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.963477 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.963644 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 08:11:50 
crc kubenswrapper[4870]: I0130 08:11:50.963773 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.964039 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.964105 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.964264 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966082 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa-serving-cert\") pod \"console-operator-58897d9998-cdzxd\" (UID: \"b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa\") " pod="openshift-console-operator/console-operator-58897d9998-cdzxd" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966115 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0373f9a1-1537-4f29-905a-b0fb2affc113-etcd-serving-ca\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966141 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l62vw\" (UniqueName: \"kubernetes.io/projected/b41bf206-4b95-49db-85b6-2e5fe6dcc5ef-kube-api-access-l62vw\") pod \"kube-storage-version-migrator-operator-b67b599dd-mtt79\" (UID: \"b41bf206-4b95-49db-85b6-2e5fe6dcc5ef\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mtt79" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966163 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpw6w\" (UniqueName: \"kubernetes.io/projected/042ed63b-a1a9-4072-ae87-71b9fb98280c-kube-api-access-lpw6w\") pod \"machine-api-operator-5694c8668f-jr94b\" (UID: \"042ed63b-a1a9-4072-ae87-71b9fb98280c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jr94b" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966192 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0373f9a1-1537-4f29-905a-b0fb2affc113-image-import-ca\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966212 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0373f9a1-1537-4f29-905a-b0fb2affc113-audit\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966231 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x8bd\" (UniqueName: 
\"kubernetes.io/projected/b2cd7eb7-87cb-44dc-a01f-17985460c12c-kube-api-access-8x8bd\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lcwc\" (UID: \"b2cd7eb7-87cb-44dc-a01f-17985460c12c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lcwc" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966246 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0373f9a1-1537-4f29-905a-b0fb2affc113-etcd-client\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966265 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0373f9a1-1537-4f29-905a-b0fb2affc113-trusted-ca-bundle\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966286 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25pjr\" (UniqueName: \"kubernetes.io/projected/15eddd48-9a41-41cb-a284-80d01c7f8aad-kube-api-access-25pjr\") pod \"openshift-config-operator-7777fb866f-8nfd4\" (UID: \"15eddd48-9a41-41cb-a284-80d01c7f8aad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nfd4" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966303 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/042ed63b-a1a9-4072-ae87-71b9fb98280c-config\") pod \"machine-api-operator-5694c8668f-jr94b\" (UID: \"042ed63b-a1a9-4072-ae87-71b9fb98280c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jr94b" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966322 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/042ed63b-a1a9-4072-ae87-71b9fb98280c-images\") pod \"machine-api-operator-5694c8668f-jr94b\" (UID: \"042ed63b-a1a9-4072-ae87-71b9fb98280c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jr94b" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966338 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0373f9a1-1537-4f29-905a-b0fb2affc113-config\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966465 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966546 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2cd7eb7-87cb-44dc-a01f-17985460c12c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lcwc\" (UID: \"b2cd7eb7-87cb-44dc-a01f-17985460c12c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lcwc" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 
08:11:50.966579 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41ae1460-1e39-4d11-9357-3e0111521a8e-metrics-tls\") pod \"dns-operator-744455d44c-s6768\" (UID: \"41ae1460-1e39-4d11-9357-3e0111521a8e\") " pod="openshift-dns-operator/dns-operator-744455d44c-s6768" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966602 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0373f9a1-1537-4f29-905a-b0fb2affc113-node-pullsecrets\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966662 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgnb8\" (UniqueName: \"kubernetes.io/projected/b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa-kube-api-access-rgnb8\") pod \"console-operator-58897d9998-cdzxd\" (UID: \"b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa\") " pod="openshift-console-operator/console-operator-58897d9998-cdzxd" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966694 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b41bf206-4b95-49db-85b6-2e5fe6dcc5ef-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mtt79\" (UID: \"b41bf206-4b95-49db-85b6-2e5fe6dcc5ef\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mtt79" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966702 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966714 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b41bf206-4b95-49db-85b6-2e5fe6dcc5ef-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mtt79\" (UID: \"b41bf206-4b95-49db-85b6-2e5fe6dcc5ef\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mtt79" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966732 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/739bcba5-d8ef-45fe-abf9-02d74d0d093c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7t7rr\" (UID: \"739bcba5-d8ef-45fe-abf9-02d74d0d093c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7t7rr" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966752 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0373f9a1-1537-4f29-905a-b0fb2affc113-audit-dir\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966784 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/739bcba5-d8ef-45fe-abf9-02d74d0d093c-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-7t7rr\" (UID: \"739bcba5-d8ef-45fe-abf9-02d74d0d093c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7t7rr" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966802 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966806 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/15eddd48-9a41-41cb-a284-80d01c7f8aad-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8nfd4\" (UID: \"15eddd48-9a41-41cb-a284-80d01c7f8aad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nfd4" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966827 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0373f9a1-1537-4f29-905a-b0fb2affc113-encryption-config\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966849 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa-config\") pod \"console-operator-58897d9998-cdzxd\" (UID: \"b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa\") " pod="openshift-console-operator/console-operator-58897d9998-cdzxd" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966864 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/739bcba5-d8ef-45fe-abf9-02d74d0d093c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7t7rr\" (UID: \"739bcba5-d8ef-45fe-abf9-02d74d0d093c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7t7rr" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966911 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2cd7eb7-87cb-44dc-a01f-17985460c12c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lcwc\" (UID: \"b2cd7eb7-87cb-44dc-a01f-17985460c12c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lcwc" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966935 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/042ed63b-a1a9-4072-ae87-71b9fb98280c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jr94b\" (UID: \"042ed63b-a1a9-4072-ae87-71b9fb98280c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jr94b" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.966966 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtnn7\" (UniqueName: \"kubernetes.io/projected/41ae1460-1e39-4d11-9357-3e0111521a8e-kube-api-access-xtnn7\") pod \"dns-operator-744455d44c-s6768\" (UID: \"41ae1460-1e39-4d11-9357-3e0111521a8e\") " pod="openshift-dns-operator/dns-operator-744455d44c-s6768" Jan 30 
08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.967005 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa-trusted-ca\") pod \"console-operator-58897d9998-cdzxd\" (UID: \"b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa\") " pod="openshift-console-operator/console-operator-58897d9998-cdzxd" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.967037 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66fq7\" (UniqueName: \"kubernetes.io/projected/0373f9a1-1537-4f29-905a-b0fb2affc113-kube-api-access-66fq7\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.967079 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0373f9a1-1537-4f29-905a-b0fb2affc113-serving-cert\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.967096 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15eddd48-9a41-41cb-a284-80d01c7f8aad-serving-cert\") pod \"openshift-config-operator-7777fb866f-8nfd4\" (UID: \"15eddd48-9a41-41cb-a284-80d01c7f8aad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nfd4" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.967160 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.967284 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.967347 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.967444 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.967574 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.967708 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.967821 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.967978 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.968064 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.968182 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 
30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.968295 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.968595 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.968710 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.968894 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.968999 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.969098 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.969191 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.969337 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.969449 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.969650 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.969690 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.969894 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.970038 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.970146 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.970249 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.970349 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.972311 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vn7wx" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.976540 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ps5nw"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.977197 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.977348 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-g6x2r"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.977953 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-l6p59"] Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.978422 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-l6p59" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.979189 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ps5nw" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.979408 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-g6x2r" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.980627 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.981397 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.990552 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.995380 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.998199 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 08:11:50 crc kubenswrapper[4870]: I0130 08:11:50.998214 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.001111 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.002769 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-dfwzs"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.003582 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2l7mq"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.023799 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.023899 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2l7mq" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.023963 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dfwzs" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.025111 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vzzk7"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.026658 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vzzk7" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.026940 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.028312 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.029005 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.029638 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.032775 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-2g2tj"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.036950 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-26vrg"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.050564 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.050612 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.050686 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.050564 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.050571 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.050835 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.051175 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-26vrg" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.053937 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qnjxg"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.054568 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.054903 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fn5hp"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.055016 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qnjxg" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.055658 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jkpz9"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.056047 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zghdb"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.056667 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jkpz9" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.056697 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fn5hp" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.056709 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xvt4n"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.056866 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zghdb" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.057454 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xvt4n" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.059023 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jh9j6"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.060181 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.062498 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ll2mq"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.063166 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ssxgx"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.063244 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ll2mq" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.065630 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.068995 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.069548 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0373f9a1-1537-4f29-905a-b0fb2affc113-etcd-client\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.069627 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0373f9a1-1537-4f29-905a-b0fb2affc113-trusted-ca-bundle\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.069653 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25pjr\" (UniqueName: \"kubernetes.io/projected/15eddd48-9a41-41cb-a284-80d01c7f8aad-kube-api-access-25pjr\") pod \"openshift-config-operator-7777fb866f-8nfd4\" (UID: \"15eddd48-9a41-41cb-a284-80d01c7f8aad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nfd4" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.069676 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/042ed63b-a1a9-4072-ae87-71b9fb98280c-config\") pod \"machine-api-operator-5694c8668f-jr94b\" (UID: \"042ed63b-a1a9-4072-ae87-71b9fb98280c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jr94b" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.069707 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/042ed63b-a1a9-4072-ae87-71b9fb98280c-images\") pod \"machine-api-operator-5694c8668f-jr94b\" (UID: \"042ed63b-a1a9-4072-ae87-71b9fb98280c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jr94b" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.069727 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0373f9a1-1537-4f29-905a-b0fb2affc113-config\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.069757 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2cd7eb7-87cb-44dc-a01f-17985460c12c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lcwc\" (UID: \"b2cd7eb7-87cb-44dc-a01f-17985460c12c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lcwc" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.069777 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/41ae1460-1e39-4d11-9357-3e0111521a8e-metrics-tls\") pod \"dns-operator-744455d44c-s6768\" (UID: \"41ae1460-1e39-4d11-9357-3e0111521a8e\") " pod="openshift-dns-operator/dns-operator-744455d44c-s6768" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.069796 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0373f9a1-1537-4f29-905a-b0fb2affc113-node-pullsecrets\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.069819 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgnb8\" (UniqueName: \"kubernetes.io/projected/b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa-kube-api-access-rgnb8\") pod \"console-operator-58897d9998-cdzxd\" (UID: \"b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa\") " pod="openshift-console-operator/console-operator-58897d9998-cdzxd" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.069837 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b41bf206-4b95-49db-85b6-2e5fe6dcc5ef-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mtt79\" (UID: \"b41bf206-4b95-49db-85b6-2e5fe6dcc5ef\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mtt79" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.069857 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b41bf206-4b95-49db-85b6-2e5fe6dcc5ef-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mtt79\" (UID: \"b41bf206-4b95-49db-85b6-2e5fe6dcc5ef\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mtt79" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.069888 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/739bcba5-d8ef-45fe-abf9-02d74d0d093c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7t7rr\" (UID: \"739bcba5-d8ef-45fe-abf9-02d74d0d093c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7t7rr" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.069908 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0373f9a1-1537-4f29-905a-b0fb2affc113-audit-dir\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.069935 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/739bcba5-d8ef-45fe-abf9-02d74d0d093c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7t7rr\" (UID: \"739bcba5-d8ef-45fe-abf9-02d74d0d093c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7t7rr" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.069957 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/15eddd48-9a41-41cb-a284-80d01c7f8aad-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-8nfd4\" (UID: \"15eddd48-9a41-41cb-a284-80d01c7f8aad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nfd4" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.069974 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0373f9a1-1537-4f29-905a-b0fb2affc113-encryption-config\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.069996 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa-config\") pod \"console-operator-58897d9998-cdzxd\" (UID: \"b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa\") " pod="openshift-console-operator/console-operator-58897d9998-cdzxd" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.070015 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/739bcba5-d8ef-45fe-abf9-02d74d0d093c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7t7rr\" (UID: \"739bcba5-d8ef-45fe-abf9-02d74d0d093c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7t7rr" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.070037 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2cd7eb7-87cb-44dc-a01f-17985460c12c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lcwc\" (UID: \"b2cd7eb7-87cb-44dc-a01f-17985460c12c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lcwc" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.070067 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/042ed63b-a1a9-4072-ae87-71b9fb98280c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jr94b\" (UID: \"042ed63b-a1a9-4072-ae87-71b9fb98280c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jr94b" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.070088 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtnn7\" (UniqueName: \"kubernetes.io/projected/41ae1460-1e39-4d11-9357-3e0111521a8e-kube-api-access-xtnn7\") pod \"dns-operator-744455d44c-s6768\" (UID: \"41ae1460-1e39-4d11-9357-3e0111521a8e\") " pod="openshift-dns-operator/dns-operator-744455d44c-s6768" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.070108 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa-trusted-ca\") pod \"console-operator-58897d9998-cdzxd\" (UID: \"b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa\") " pod="openshift-console-operator/console-operator-58897d9998-cdzxd" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.070131 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66fq7\" (UniqueName: \"kubernetes.io/projected/0373f9a1-1537-4f29-905a-b0fb2affc113-kube-api-access-66fq7\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " 
pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.070155 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0373f9a1-1537-4f29-905a-b0fb2affc113-serving-cert\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.070176 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15eddd48-9a41-41cb-a284-80d01c7f8aad-serving-cert\") pod \"openshift-config-operator-7777fb866f-8nfd4\" (UID: \"15eddd48-9a41-41cb-a284-80d01c7f8aad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nfd4" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.070202 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa-serving-cert\") pod \"console-operator-58897d9998-cdzxd\" (UID: \"b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa\") " pod="openshift-console-operator/console-operator-58897d9998-cdzxd" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.070223 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0373f9a1-1537-4f29-905a-b0fb2affc113-etcd-serving-ca\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.070246 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l62vw\" (UniqueName: \"kubernetes.io/projected/b41bf206-4b95-49db-85b6-2e5fe6dcc5ef-kube-api-access-l62vw\") pod \"kube-storage-version-migrator-operator-b67b599dd-mtt79\" (UID: \"b41bf206-4b95-49db-85b6-2e5fe6dcc5ef\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mtt79" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.070267 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpw6w\" (UniqueName: \"kubernetes.io/projected/042ed63b-a1a9-4072-ae87-71b9fb98280c-kube-api-access-lpw6w\") pod \"machine-api-operator-5694c8668f-jr94b\" (UID: \"042ed63b-a1a9-4072-ae87-71b9fb98280c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jr94b" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.071029 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0373f9a1-1537-4f29-905a-b0fb2affc113-image-import-ca\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.071051 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0373f9a1-1537-4f29-905a-b0fb2affc113-audit\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.071074 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x8bd\" 
(UniqueName: \"kubernetes.io/projected/b2cd7eb7-87cb-44dc-a01f-17985460c12c-kube-api-access-8x8bd\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lcwc\" (UID: \"b2cd7eb7-87cb-44dc-a01f-17985460c12c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lcwc" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.071553 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vjvf5"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.071604 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lcwc"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.072180 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0373f9a1-1537-4f29-905a-b0fb2affc113-node-pullsecrets\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.072206 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.073459 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0373f9a1-1537-4f29-905a-b0fb2affc113-config\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.073521 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/042ed63b-a1a9-4072-ae87-71b9fb98280c-images\") pod \"machine-api-operator-5694c8668f-jr94b\" (UID: \"042ed63b-a1a9-4072-ae87-71b9fb98280c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jr94b" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.074347 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kkf4z"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.074373 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-s6768"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.075402 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2cd7eb7-87cb-44dc-a01f-17985460c12c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lcwc\" (UID: \"b2cd7eb7-87cb-44dc-a01f-17985460c12c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lcwc" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.075710 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa-config\") pod \"console-operator-58897d9998-cdzxd\" (UID: \"b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa\") " pod="openshift-console-operator/console-operator-58897d9998-cdzxd" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.075937 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0373f9a1-1537-4f29-905a-b0fb2affc113-audit-dir\") pod \"apiserver-76f77b778f-2g2tj\" (UID: 
\"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.076751 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/739bcba5-d8ef-45fe-abf9-02d74d0d093c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7t7rr\" (UID: \"739bcba5-d8ef-45fe-abf9-02d74d0d093c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7t7rr" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.076825 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0373f9a1-1537-4f29-905a-b0fb2affc113-trusted-ca-bundle\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.076967 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/15eddd48-9a41-41cb-a284-80d01c7f8aad-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8nfd4\" (UID: \"15eddd48-9a41-41cb-a284-80d01c7f8aad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nfd4" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.077762 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0373f9a1-1537-4f29-905a-b0fb2affc113-etcd-serving-ca\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.079008 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0373f9a1-1537-4f29-905a-b0fb2affc113-image-import-ca\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.079364 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0373f9a1-1537-4f29-905a-b0fb2affc113-encryption-config\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.079529 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0373f9a1-1537-4f29-905a-b0fb2affc113-audit\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.081338 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-2mj87"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.082911 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/042ed63b-a1a9-4072-ae87-71b9fb98280c-config\") pod \"machine-api-operator-5694c8668f-jr94b\" (UID: \"042ed63b-a1a9-4072-ae87-71b9fb98280c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jr94b" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.083265 4870 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-mnsdp"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.083486 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa-trusted-ca\") pod \"console-operator-58897d9998-cdzxd\" (UID: \"b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa\") " pod="openshift-console-operator/console-operator-58897d9998-cdzxd" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.085365 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mtt79"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.085528 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mnsdp" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.086112 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-cdzxd"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.086440 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/739bcba5-d8ef-45fe-abf9-02d74d0d093c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7t7rr\" (UID: \"739bcba5-d8ef-45fe-abf9-02d74d0d093c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7t7rr" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.089950 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0373f9a1-1537-4f29-905a-b0fb2affc113-serving-cert\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.089973 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41ae1460-1e39-4d11-9357-3e0111521a8e-metrics-tls\") pod \"dns-operator-744455d44c-s6768\" (UID: \"41ae1460-1e39-4d11-9357-3e0111521a8e\") " pod="openshift-dns-operator/dns-operator-744455d44c-s6768" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.090850 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.091316 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nw244"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.093184 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15eddd48-9a41-41cb-a284-80d01c7f8aad-serving-cert\") pod \"openshift-config-operator-7777fb866f-8nfd4\" (UID: \"15eddd48-9a41-41cb-a284-80d01c7f8aad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nfd4" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.093633 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2cd7eb7-87cb-44dc-a01f-17985460c12c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lcwc\" (UID: \"b2cd7eb7-87cb-44dc-a01f-17985460c12c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lcwc" Jan 30 08:11:51 
crc kubenswrapper[4870]: I0130 08:11:51.094932 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jr94b"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.097720 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-v7bvt"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.102866 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/042ed63b-a1a9-4072-ae87-71b9fb98280c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jr94b\" (UID: \"042ed63b-a1a9-4072-ae87-71b9fb98280c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jr94b" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.109084 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa-serving-cert\") pod \"console-operator-58897d9998-cdzxd\" (UID: \"b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa\") " pod="openshift-console-operator/console-operator-58897d9998-cdzxd" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.110974 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0373f9a1-1537-4f29-905a-b0fb2affc113-etcd-client\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.112484 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.115552 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xxrkx"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.133383 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fn5hp"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.137420 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vzzk7"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.138659 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xvt4n"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.141321 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jkpz9"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.143131 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-l6p59"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.143254 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.143744 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2l7mq"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.145718 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kvwdd"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.147060 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8nfd4"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.148328 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tsghw"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.149148 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sfs65"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.150814 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7t7rr"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.153153 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.153311 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zghdb"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.163105 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k7wnt"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.166190 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.167757 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.170925 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b41bf206-4b95-49db-85b6-2e5fe6dcc5ef-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mtt79\" (UID: \"b41bf206-4b95-49db-85b6-2e5fe6dcc5ef\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mtt79" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.172669 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.175266 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67xxt"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.179934 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qnjxg"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.181734 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.182978 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ps5nw"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.183976 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k7wnt"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.185638 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-26vrg"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.186285 4870 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jh9j6"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.187169 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-g6x2r"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.189477 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.202217 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mnsdp"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.203421 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ll2mq"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.204827 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-szhwx"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.206759 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-pm4xm"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.206982 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-szhwx" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.208424 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pm4xm" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.209221 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-szhwx"] Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.209290 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.216729 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b41bf206-4b95-49db-85b6-2e5fe6dcc5ef-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mtt79\" (UID: \"b41bf206-4b95-49db-85b6-2e5fe6dcc5ef\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mtt79" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.231424 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.250324 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.270262 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.310370 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.330240 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.349562 4870 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.370313 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.410249 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.429474 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.450344 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.469739 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.490492 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.511291 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.530249 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.550413 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.570162 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.590785 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.611551 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.632006 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.651066 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.671024 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.691287 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.716749 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.731762 4870 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.752203 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.770329 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.790952 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.811324 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.831054 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.850214 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.870843 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.890657 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.910977 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.929421 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.951195 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.971602 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 08:11:51 crc kubenswrapper[4870]: I0130 08:11:51.991362 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.011940 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.032142 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.049061 4870 request.go:700] Waited for 1.018763456s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&limit=500&resourceVersion=0 Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.051806 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.071412 4870 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.091108 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.109454 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.139943 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.150747 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.172033 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.190638 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.211149 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.230963 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.251410 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.271185 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.290057 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.310516 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.331344 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.350558 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.371152 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.390519 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.410044 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.431692 4870 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.451046 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.470660 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.491357 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.519670 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.530824 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.551116 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.570701 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.591262 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.610905 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.656353 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x8bd\" (UniqueName: \"kubernetes.io/projected/b2cd7eb7-87cb-44dc-a01f-17985460c12c-kube-api-access-8x8bd\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lcwc\" (UID: \"b2cd7eb7-87cb-44dc-a01f-17985460c12c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lcwc" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.672128 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgnb8\" (UniqueName: \"kubernetes.io/projected/b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa-kube-api-access-rgnb8\") pod \"console-operator-58897d9998-cdzxd\" (UID: \"b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa\") " pod="openshift-console-operator/console-operator-58897d9998-cdzxd" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.690455 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/739bcba5-d8ef-45fe-abf9-02d74d0d093c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7t7rr\" (UID: \"739bcba5-d8ef-45fe-abf9-02d74d0d093c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7t7rr" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.699772 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7t7rr" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.713492 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l62vw\" (UniqueName: \"kubernetes.io/projected/b41bf206-4b95-49db-85b6-2e5fe6dcc5ef-kube-api-access-l62vw\") pod \"kube-storage-version-migrator-operator-b67b599dd-mtt79\" (UID: \"b41bf206-4b95-49db-85b6-2e5fe6dcc5ef\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mtt79" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.735426 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpw6w\" (UniqueName: \"kubernetes.io/projected/042ed63b-a1a9-4072-ae87-71b9fb98280c-kube-api-access-lpw6w\") pod \"machine-api-operator-5694c8668f-jr94b\" (UID: \"042ed63b-a1a9-4072-ae87-71b9fb98280c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jr94b" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.761009 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66fq7\" (UniqueName: \"kubernetes.io/projected/0373f9a1-1537-4f29-905a-b0fb2affc113-kube-api-access-66fq7\") pod \"apiserver-76f77b778f-2g2tj\" (UID: \"0373f9a1-1537-4f29-905a-b0fb2affc113\") " pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.762517 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mtt79" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.781120 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtnn7\" (UniqueName: \"kubernetes.io/projected/41ae1460-1e39-4d11-9357-3e0111521a8e-kube-api-access-xtnn7\") pod \"dns-operator-744455d44c-s6768\" (UID: \"41ae1460-1e39-4d11-9357-3e0111521a8e\") " pod="openshift-dns-operator/dns-operator-744455d44c-s6768" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.791920 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.802022 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25pjr\" (UniqueName: \"kubernetes.io/projected/15eddd48-9a41-41cb-a284-80d01c7f8aad-kube-api-access-25pjr\") pod \"openshift-config-operator-7777fb866f-8nfd4\" (UID: \"15eddd48-9a41-41cb-a284-80d01c7f8aad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nfd4" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.812601 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.851738 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.855253 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.872186 4870 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.891717 4870 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.912451 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.933702 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nfd4" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.936820 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.945845 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lcwc" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.951479 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.964219 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.964535 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-cdzxd" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.972316 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.987464 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jr94b" Jan 30 08:11:52 crc kubenswrapper[4870]: I0130 08:11:52.990785 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.007763 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7t7rr"] Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.010343 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 08:11:53 crc kubenswrapper[4870]: W0130 08:11:53.029125 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod739bcba5_d8ef_45fe_abf9_02d74d0d093c.slice/crio-d78272f33e8d184722965f0ee1a97e79bfa07cf5540dee502f1ae7804eb4a5df WatchSource:0}: Error finding container d78272f33e8d184722965f0ee1a97e79bfa07cf5540dee502f1ae7804eb4a5df: Status 404 returned error can't find the container with id d78272f33e8d184722965f0ee1a97e79bfa07cf5540dee502f1ae7804eb4a5df Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.032664 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.033889 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mtt79"] Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.049103 4870 request.go:700] Waited for 1.756211651s due to client-side throttling, not priority 
and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/persistentvolumes/pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.054653 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-s6768" Jan 30 08:11:53 crc kubenswrapper[4870]: W0130 08:11:53.068036 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb41bf206_4b95_49db_85b6_2e5fe6dcc5ef.slice/crio-c81f54e94b920e6c7cfd46f69dde3b3574a636bf82ebf62b99dce8708d66585b WatchSource:0}: Error finding container c81f54e94b920e6c7cfd46f69dde3b3574a636bf82ebf62b99dce8708d66585b: Status 404 returned error can't find the container with id c81f54e94b920e6c7cfd46f69dde3b3574a636bf82ebf62b99dce8708d66585b Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.076403 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7t7rr" event={"ID":"739bcba5-d8ef-45fe-abf9-02d74d0d093c","Type":"ContainerStarted","Data":"d78272f33e8d184722965f0ee1a97e79bfa07cf5540dee502f1ae7804eb4a5df"} Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.095024 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-oauth-serving-cert\") pod \"console-f9d7485db-2mj87\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.095110 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c6a6cf-5413-4524-a86c-11fa4a19821f-config\") pod \"etcd-operator-b45778765-ssxgx\" (UID: \"02c6a6cf-5413-4524-a86c-11fa4a19821f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.095130 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/406fb8be-c783-4ef8-8aae-5430b0226d17-registry-tls\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.095153 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c6a6cf-5413-4524-a86c-11fa4a19821f-serving-cert\") pod \"etcd-operator-b45778765-ssxgx\" (UID: \"02c6a6cf-5413-4524-a86c-11fa4a19821f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.095314 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/02c6a6cf-5413-4524-a86c-11fa4a19821f-etcd-service-ca\") pod \"etcd-operator-b45778765-ssxgx\" (UID: \"02c6a6cf-5413-4524-a86c-11fa4a19821f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.095381 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-audit-dir\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.095419 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.095504 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e2bdbb40-bece-420f-9ff7-bdeff90c8bd2-metrics-tls\") pod \"ingress-operator-5b745b69d9-kvwdd\" (UID: \"e2bdbb40-bece-420f-9ff7-bdeff90c8bd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kvwdd" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.095549 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dp8v\" (UniqueName: \"kubernetes.io/projected/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-kube-api-access-4dp8v\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.095586 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/faa3ca31-2951-4f0d-84f0-0b19a32c9927-encryption-config\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.095622 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-client-ca\") pod \"controller-manager-879f6c89f-kkf4z\" (UID: \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.095647 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-kkf4z\" (UID: \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.095677 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-audit-policies\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.095798 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c488e93c-573d-4d04-a272-699af1059a0e-serving-cert\") pod \"route-controller-manager-6576b87f9c-8p957\" (UID: \"c488e93c-573d-4d04-a272-699af1059a0e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.095825 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5fkn\" (UniqueName: \"kubernetes.io/projected/e2bdbb40-bece-420f-9ff7-bdeff90c8bd2-kube-api-access-c5fkn\") pod \"ingress-operator-5b745b69d9-kvwdd\" (UID: \"e2bdbb40-bece-420f-9ff7-bdeff90c8bd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kvwdd" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.095866 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frbv4\" (UniqueName: \"kubernetes.io/projected/c488e93c-573d-4d04-a272-699af1059a0e-kube-api-access-frbv4\") pod \"route-controller-manager-6576b87f9c-8p957\" (UID: \"c488e93c-573d-4d04-a272-699af1059a0e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.096028 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.096069 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct6hw\" (UniqueName: \"kubernetes.io/projected/2aa49ce7-f902-408a-94f1-da14a661e813-kube-api-access-ct6hw\") pod \"console-f9d7485db-2mj87\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.096114 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c488e93c-573d-4d04-a272-699af1059a0e-config\") pod \"route-controller-manager-6576b87f9c-8p957\" (UID: \"c488e93c-573d-4d04-a272-699af1059a0e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.096145 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a55dd8c8-fb6d-450b-a80d-35e7223d2cff-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-67xxt\" (UID: \"a55dd8c8-fb6d-450b-a80d-35e7223d2cff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67xxt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.096178 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/406fb8be-c783-4ef8-8aae-5430b0226d17-registry-certificates\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.096236 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-trusted-ca-bundle\") pod \"console-f9d7485db-2mj87\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.096267 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.096292 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a55dd8c8-fb6d-450b-a80d-35e7223d2cff-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-67xxt\" (UID: \"a55dd8c8-fb6d-450b-a80d-35e7223d2cff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67xxt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.096323 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/406fb8be-c783-4ef8-8aae-5430b0226d17-trusted-ca\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.096349 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faa3ca31-2951-4f0d-84f0-0b19a32c9927-serving-cert\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.096377 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a361e11a-9e2f-4abf-a8c1-783f328f13a9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-nw244\" (UID: \"a361e11a-9e2f-4abf-a8c1-783f328f13a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nw244" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.096412 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmfbv\" (UniqueName: \"kubernetes.io/projected/78280554-7b5b-4ccf-a674-2664144e4f5a-kube-api-access-dmfbv\") pod \"downloads-7954f5f757-v7bvt\" (UID: \"78280554-7b5b-4ccf-a674-2664144e4f5a\") " pod="openshift-console/downloads-7954f5f757-v7bvt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.096441 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/406fb8be-c783-4ef8-8aae-5430b0226d17-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.096472 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dc32e9a-d384-4f5e-b1ea-c9ac2f57e715-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vjvf5\" (UID: \"2dc32e9a-d384-4f5e-b1ea-c9ac2f57e715\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vjvf5" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.096544 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.096568 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.096726 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/faa3ca31-2951-4f0d-84f0-0b19a32c9927-audit-dir\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.096782 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbwqk\" (UniqueName: \"kubernetes.io/projected/02c6a6cf-5413-4524-a86c-11fa4a19821f-kube-api-access-hbwqk\") pod \"etcd-operator-b45778765-ssxgx\" (UID: \"02c6a6cf-5413-4524-a86c-11fa4a19821f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.096842 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzzzz\" (UniqueName: \"kubernetes.io/projected/faa3ca31-2951-4f0d-84f0-0b19a32c9927-kube-api-access-nzzzz\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.096868 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2aa49ce7-f902-408a-94f1-da14a661e813-console-oauth-config\") pod \"console-f9d7485db-2mj87\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.096918 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/02c6a6cf-5413-4524-a86c-11fa4a19821f-etcd-ca\") pod \"etcd-operator-b45778765-ssxgx\" (UID: \"02c6a6cf-5413-4524-a86c-11fa4a19821f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097015 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2aa49ce7-f902-408a-94f1-da14a661e813-console-serving-cert\") pod \"console-f9d7485db-2mj87\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097047 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-serving-cert\") pod \"controller-manager-879f6c89f-kkf4z\" (UID: \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097070 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097113 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097134 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/406fb8be-c783-4ef8-8aae-5430b0226d17-bound-sa-token\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097150 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097171 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097210 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a361e11a-9e2f-4abf-a8c1-783f328f13a9-config\") pod \"kube-apiserver-operator-766d6c64bb-nw244\" (UID: \"a361e11a-9e2f-4abf-a8c1-783f328f13a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nw244" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097234 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57tbk\" (UniqueName: 
\"kubernetes.io/projected/a55dd8c8-fb6d-450b-a80d-35e7223d2cff-kube-api-access-57tbk\") pod \"cluster-image-registry-operator-dc59b4c8b-67xxt\" (UID: \"a55dd8c8-fb6d-450b-a80d-35e7223d2cff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67xxt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097256 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/faa3ca31-2951-4f0d-84f0-0b19a32c9927-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097278 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a361e11a-9e2f-4abf-a8c1-783f328f13a9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-nw244\" (UID: \"a361e11a-9e2f-4abf-a8c1-783f328f13a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nw244" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097295 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/02c6a6cf-5413-4524-a86c-11fa4a19821f-etcd-client\") pod \"etcd-operator-b45778765-ssxgx\" (UID: \"02c6a6cf-5413-4524-a86c-11fa4a19821f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097324 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2bdbb40-bece-420f-9ff7-bdeff90c8bd2-trusted-ca\") pod \"ingress-operator-5b745b69d9-kvwdd\" (UID: \"e2bdbb40-bece-420f-9ff7-bdeff90c8bd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kvwdd" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097349 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a55dd8c8-fb6d-450b-a80d-35e7223d2cff-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-67xxt\" (UID: \"a55dd8c8-fb6d-450b-a80d-35e7223d2cff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67xxt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097372 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/faa3ca31-2951-4f0d-84f0-0b19a32c9927-audit-policies\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097391 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c488e93c-573d-4d04-a272-699af1059a0e-client-ca\") pod \"route-controller-manager-6576b87f9c-8p957\" (UID: \"c488e93c-573d-4d04-a272-699af1059a0e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097410 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/faa3ca31-2951-4f0d-84f0-0b19a32c9927-etcd-client\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097432 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-config\") pod \"controller-manager-879f6c89f-kkf4z\" (UID: \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097453 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc32e9a-d384-4f5e-b1ea-c9ac2f57e715-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vjvf5\" (UID: \"2dc32e9a-d384-4f5e-b1ea-c9ac2f57e715\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vjvf5" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097473 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097504 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46h2g\" (UniqueName: \"kubernetes.io/projected/2dc32e9a-d384-4f5e-b1ea-c9ac2f57e715-kube-api-access-46h2g\") pod \"openshift-apiserver-operator-796bbdcf4f-vjvf5\" (UID: \"2dc32e9a-d384-4f5e-b1ea-c9ac2f57e715\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vjvf5" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097525 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jldpq\" (UniqueName: \"kubernetes.io/projected/406fb8be-c783-4ef8-8aae-5430b0226d17-kube-api-access-jldpq\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097564 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2bdbb40-bece-420f-9ff7-bdeff90c8bd2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kvwdd\" (UID: \"e2bdbb40-bece-420f-9ff7-bdeff90c8bd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kvwdd" Jan 30 08:11:53 crc kubenswrapper[4870]: E0130 08:11:53.097582 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:53.597566135 +0000 UTC m=+152.293113244 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097627 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-console-config\") pod \"console-f9d7485db-2mj87\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097916 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097945 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.097977 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/406fb8be-c783-4ef8-8aae-5430b0226d17-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.098001 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx49z\" (UniqueName: \"kubernetes.io/projected/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-kube-api-access-nx49z\") pod \"controller-manager-879f6c89f-kkf4z\" (UID: \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.099239 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faa3ca31-2951-4f0d-84f0-0b19a32c9927-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.099272 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-service-ca\") pod \"console-f9d7485db-2mj87\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 
08:11:53.200111 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200348 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm9wg\" (UniqueName: \"kubernetes.io/projected/a8a1f91a-b48e-442f-9ab6-d704b3927315-kube-api-access-nm9wg\") pod \"ingress-canary-szhwx\" (UID: \"a8a1f91a-b48e-442f-9ab6-d704b3927315\") " pod="openshift-ingress-canary/ingress-canary-szhwx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200390 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/02c6a6cf-5413-4524-a86c-11fa4a19821f-etcd-ca\") pod \"etcd-operator-b45778765-ssxgx\" (UID: \"02c6a6cf-5413-4524-a86c-11fa4a19821f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200410 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/20fcc16b-f2b2-4a33-a8b2-567bec77d7ca-mountpoint-dir\") pod \"csi-hostpathplugin-k7wnt\" (UID: \"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200427 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzzzz\" (UniqueName: \"kubernetes.io/projected/faa3ca31-2951-4f0d-84f0-0b19a32c9927-kube-api-access-nzzzz\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200446 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2aa49ce7-f902-408a-94f1-da14a661e813-console-oauth-config\") pod \"console-f9d7485db-2mj87\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200463 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk6dp\" (UniqueName: \"kubernetes.io/projected/a58a222f-98a0-46b4-9ea8-36a922f6a349-kube-api-access-kk6dp\") pod \"dns-default-mnsdp\" (UID: \"a58a222f-98a0-46b4-9ea8-36a922f6a349\") " pod="openshift-dns/dns-default-mnsdp" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200479 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e64f35db-e72b-4d73-b501-7c2aff5cc609-images\") pod \"machine-config-operator-74547568cd-qnjxg\" (UID: \"e64f35db-e72b-4d73-b501-7c2aff5cc609\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qnjxg" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200496 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/539f32b7-3075-49f4-b9f6-e63ac1d76d61-proxy-tls\") pod \"machine-config-controller-84d6567774-zghdb\" (UID: 
\"539f32b7-3075-49f4-b9f6-e63ac1d76d61\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zghdb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200531 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200550 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200577 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78a27619-258e-4bed-afb0-1706904c6f9d-config\") pod \"service-ca-operator-777779d784-xvt4n\" (UID: \"78a27619-258e-4bed-afb0-1706904c6f9d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xvt4n" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200597 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a361e11a-9e2f-4abf-a8c1-783f328f13a9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-nw244\" (UID: \"a361e11a-9e2f-4abf-a8c1-783f328f13a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nw244" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200613 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a55dd8c8-fb6d-450b-a80d-35e7223d2cff-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-67xxt\" (UID: \"a55dd8c8-fb6d-450b-a80d-35e7223d2cff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67xxt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200632 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j4f7\" (UniqueName: \"kubernetes.io/projected/38a2c6cb-fd9d-42f6-8774-647c544bd0f9-kube-api-access-6j4f7\") pod \"multus-admission-controller-857f4d67dd-g6x2r\" (UID: \"38a2c6cb-fd9d-42f6-8774-647c544bd0f9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g6x2r" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200654 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/836ae3f6-06f5-4996-9f9c-cacfb63fe855-profile-collector-cert\") pod \"catalog-operator-68c6474976-jkpz9\" (UID: \"836ae3f6-06f5-4996-9f9c-cacfb63fe855\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jkpz9" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200673 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/faa3ca31-2951-4f0d-84f0-0b19a32c9927-audit-policies\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: 
\"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200689 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c488e93c-573d-4d04-a272-699af1059a0e-client-ca\") pod \"route-controller-manager-6576b87f9c-8p957\" (UID: \"c488e93c-573d-4d04-a272-699af1059a0e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200717 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e-config\") pod \"authentication-operator-69f744f599-l6p59\" (UID: \"4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l6p59" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200734 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/faa3ca31-2951-4f0d-84f0-0b19a32c9927-etcd-client\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200756 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc32e9a-d384-4f5e-b1ea-c9ac2f57e715-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vjvf5\" (UID: \"2dc32e9a-d384-4f5e-b1ea-c9ac2f57e715\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vjvf5" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200779 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200795 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46h2g\" (UniqueName: \"kubernetes.io/projected/2dc32e9a-d384-4f5e-b1ea-c9ac2f57e715-kube-api-access-46h2g\") pod \"openshift-apiserver-operator-796bbdcf4f-vjvf5\" (UID: \"2dc32e9a-d384-4f5e-b1ea-c9ac2f57e715\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vjvf5" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200814 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78a27619-258e-4bed-afb0-1706904c6f9d-serving-cert\") pod \"service-ca-operator-777779d784-xvt4n\" (UID: \"78a27619-258e-4bed-afb0-1706904c6f9d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xvt4n" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200829 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jldpq\" (UniqueName: \"kubernetes.io/projected/406fb8be-c783-4ef8-8aae-5430b0226d17-kube-api-access-jldpq\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200848 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2bdbb40-bece-420f-9ff7-bdeff90c8bd2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kvwdd\" (UID: \"e2bdbb40-bece-420f-9ff7-bdeff90c8bd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kvwdd" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200866 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200945 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a58a222f-98a0-46b4-9ea8-36a922f6a349-config-volume\") pod \"dns-default-mnsdp\" (UID: \"a58a222f-98a0-46b4-9ea8-36a922f6a349\") " pod="openshift-dns/dns-default-mnsdp" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.200998 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-service-ca\") pod \"console-f9d7485db-2mj87\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201013 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8a1f91a-b48e-442f-9ab6-d704b3927315-cert\") pod \"ingress-canary-szhwx\" (UID: \"a8a1f91a-b48e-442f-9ab6-d704b3927315\") " pod="openshift-ingress-canary/ingress-canary-szhwx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201040 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2xk7\" (UniqueName: \"kubernetes.io/projected/539f32b7-3075-49f4-b9f6-e63ac1d76d61-kube-api-access-p2xk7\") pod \"machine-config-controller-84d6567774-zghdb\" (UID: \"539f32b7-3075-49f4-b9f6-e63ac1d76d61\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zghdb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201060 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/53a09b74-1b42-4535-a853-0752b6d1f90a-srv-cert\") pod \"olm-operator-6b444d44fb-fn5hp\" (UID: \"53a09b74-1b42-4535-a853-0752b6d1f90a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fn5hp" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201101 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fe4278ad-53ec-4f7f-9c39-a00b6fa505c5-default-certificate\") pod \"router-default-5444994796-dfwzs\" (UID: \"fe4278ad-53ec-4f7f-9c39-a00b6fa505c5\") " pod="openshift-ingress/router-default-5444994796-dfwzs" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201117 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e64f35db-e72b-4d73-b501-7c2aff5cc609-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qnjxg\" (UID: \"e64f35db-e72b-4d73-b501-7c2aff5cc609\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qnjxg" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201133 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7053ea40-6d30-41d8-bcb1-8f55e95feb22-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-26vrg\" (UID: \"7053ea40-6d30-41d8-bcb1-8f55e95feb22\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-26vrg" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201150 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crsg9\" (UniqueName: \"kubernetes.io/projected/836ae3f6-06f5-4996-9f9c-cacfb63fe855-kube-api-access-crsg9\") pod \"catalog-operator-68c6474976-jkpz9\" (UID: \"836ae3f6-06f5-4996-9f9c-cacfb63fe855\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jkpz9" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201165 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/20fcc16b-f2b2-4a33-a8b2-567bec77d7ca-socket-dir\") pod \"csi-hostpathplugin-k7wnt\" (UID: \"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201192 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c6a6cf-5413-4524-a86c-11fa4a19821f-serving-cert\") pod \"etcd-operator-b45778765-ssxgx\" (UID: \"02c6a6cf-5413-4524-a86c-11fa4a19821f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201208 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201225 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a58a222f-98a0-46b4-9ea8-36a922f6a349-metrics-tls\") pod \"dns-default-mnsdp\" (UID: \"a58a222f-98a0-46b4-9ea8-36a922f6a349\") " pod="openshift-dns/dns-default-mnsdp" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201244 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgt5z\" (UniqueName: \"kubernetes.io/projected/9624fd43-bfa5-42c8-bebd-95a89988847d-kube-api-access-wgt5z\") pod \"machine-approver-56656f9798-vn7wx\" (UID: \"9624fd43-bfa5-42c8-bebd-95a89988847d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vn7wx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201260 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bznlq\" 
(UniqueName: \"kubernetes.io/projected/4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e-kube-api-access-bznlq\") pod \"authentication-operator-69f744f599-l6p59\" (UID: \"4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l6p59" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201286 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e2bdbb40-bece-420f-9ff7-bdeff90c8bd2-metrics-tls\") pod \"ingress-operator-5b745b69d9-kvwdd\" (UID: \"e2bdbb40-bece-420f-9ff7-bdeff90c8bd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kvwdd" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201304 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmwtm\" (UniqueName: \"kubernetes.io/projected/8ede517d-773d-4f0b-8c0a-42ae13359f95-kube-api-access-qmwtm\") pod \"marketplace-operator-79b997595-jh9j6\" (UID: \"8ede517d-773d-4f0b-8c0a-42ae13359f95\") " pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201319 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88dw5\" (UniqueName: \"kubernetes.io/projected/53a09b74-1b42-4535-a853-0752b6d1f90a-kube-api-access-88dw5\") pod \"olm-operator-6b444d44fb-fn5hp\" (UID: \"53a09b74-1b42-4535-a853-0752b6d1f90a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fn5hp" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201348 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38a2c6cb-fd9d-42f6-8774-647c544bd0f9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-g6x2r\" (UID: \"38a2c6cb-fd9d-42f6-8774-647c544bd0f9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g6x2r" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201374 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe4278ad-53ec-4f7f-9c39-a00b6fa505c5-service-ca-bundle\") pod \"router-default-5444994796-dfwzs\" (UID: \"fe4278ad-53ec-4f7f-9c39-a00b6fa505c5\") " pod="openshift-ingress/router-default-5444994796-dfwzs" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201391 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdz27\" (UniqueName: \"kubernetes.io/projected/fe4278ad-53ec-4f7f-9c39-a00b6fa505c5-kube-api-access-wdz27\") pod \"router-default-5444994796-dfwzs\" (UID: \"fe4278ad-53ec-4f7f-9c39-a00b6fa505c5\") " pod="openshift-ingress/router-default-5444994796-dfwzs" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201406 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/46d623aa-7e54-4c20-aed3-3f125395a073-apiservice-cert\") pod \"packageserver-d55dfcdfc-dhpbr\" (UID: \"46d623aa-7e54-4c20-aed3-3f125395a073\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201420 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/e64f35db-e72b-4d73-b501-7c2aff5cc609-proxy-tls\") pod \"machine-config-operator-74547568cd-qnjxg\" (UID: \"e64f35db-e72b-4d73-b501-7c2aff5cc609\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qnjxg" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201438 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201453 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lss9f\" (UniqueName: \"kubernetes.io/projected/78a27619-258e-4bed-afb0-1706904c6f9d-kube-api-access-lss9f\") pod \"service-ca-operator-777779d784-xvt4n\" (UID: \"78a27619-258e-4bed-afb0-1706904c6f9d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xvt4n" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201480 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c488e93c-573d-4d04-a272-699af1059a0e-config\") pod \"route-controller-manager-6576b87f9c-8p957\" (UID: \"c488e93c-573d-4d04-a272-699af1059a0e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201498 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9624fd43-bfa5-42c8-bebd-95a89988847d-auth-proxy-config\") pod \"machine-approver-56656f9798-vn7wx\" (UID: \"9624fd43-bfa5-42c8-bebd-95a89988847d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vn7wx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201515 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/20fcc16b-f2b2-4a33-a8b2-567bec77d7ca-plugins-dir\") pod \"csi-hostpathplugin-k7wnt\" (UID: \"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201532 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-trusted-ca-bundle\") pod \"console-f9d7485db-2mj87\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201549 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201565 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faa3ca31-2951-4f0d-84f0-0b19a32c9927-serving-cert\") pod 
\"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201586 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a361e11a-9e2f-4abf-a8c1-783f328f13a9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-nw244\" (UID: \"a361e11a-9e2f-4abf-a8c1-783f328f13a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nw244" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201604 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmfbv\" (UniqueName: \"kubernetes.io/projected/78280554-7b5b-4ccf-a674-2664144e4f5a-kube-api-access-dmfbv\") pod \"downloads-7954f5f757-v7bvt\" (UID: \"78280554-7b5b-4ccf-a674-2664144e4f5a\") " pod="openshift-console/downloads-7954f5f757-v7bvt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201621 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/20fcc16b-f2b2-4a33-a8b2-567bec77d7ca-registration-dir\") pod \"csi-hostpathplugin-k7wnt\" (UID: \"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201637 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1a6267b2-1222-4c0b-a890-c146d83b583d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tsghw\" (UID: \"1a6267b2-1222-4c0b-a890-c146d83b583d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tsghw" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201662 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201679 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9624fd43-bfa5-42c8-bebd-95a89988847d-config\") pod \"machine-approver-56656f9798-vn7wx\" (UID: \"9624fd43-bfa5-42c8-bebd-95a89988847d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vn7wx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201697 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/53a09b74-1b42-4535-a853-0752b6d1f90a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-fn5hp\" (UID: \"53a09b74-1b42-4535-a853-0752b6d1f90a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fn5hp" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201715 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbwqk\" (UniqueName: \"kubernetes.io/projected/02c6a6cf-5413-4524-a86c-11fa4a19821f-kube-api-access-hbwqk\") pod \"etcd-operator-b45778765-ssxgx\" (UID: \"02c6a6cf-5413-4524-a86c-11fa4a19821f\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201743 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wsh5\" (UniqueName: \"kubernetes.io/projected/6f6b1608-a0e3-4b68-ae9b-f8cfeed9cd72-kube-api-access-6wsh5\") pod \"control-plane-machine-set-operator-78cbb6b69f-vzzk7\" (UID: \"6f6b1608-a0e3-4b68-ae9b-f8cfeed9cd72\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vzzk7" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201766 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chvf4\" (UniqueName: \"kubernetes.io/projected/e64f35db-e72b-4d73-b501-7c2aff5cc609-kube-api-access-chvf4\") pod \"machine-config-operator-74547568cd-qnjxg\" (UID: \"e64f35db-e72b-4d73-b501-7c2aff5cc609\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qnjxg" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201784 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93fd6b37-eee2-4fd5-aa18-51eecea65a3b-secret-volume\") pod \"collect-profiles-29496000-25p92\" (UID: \"93fd6b37-eee2-4fd5-aa18-51eecea65a3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201799 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77lc5\" (UniqueName: \"kubernetes.io/projected/7914d7ba-40e4-4ada-9dd4-66f3d86d5dc8-kube-api-access-77lc5\") pod \"service-ca-9c57cc56f-2l7mq\" (UID: \"7914d7ba-40e4-4ada-9dd4-66f3d86d5dc8\") " pod="openshift-service-ca/service-ca-9c57cc56f-2l7mq" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201828 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2aa49ce7-f902-408a-94f1-da14a661e813-console-serving-cert\") pod \"console-f9d7485db-2mj87\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201845 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ede517d-773d-4f0b-8c0a-42ae13359f95-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jh9j6\" (UID: \"8ede517d-773d-4f0b-8c0a-42ae13359f95\") " pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201862 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8x7q\" (UniqueName: \"kubernetes.io/projected/f6d9ba19-88ea-489c-9f03-918e8b225e3b-kube-api-access-n8x7q\") pod \"migrator-59844c95c7-ps5nw\" (UID: \"f6d9ba19-88ea-489c-9f03-918e8b225e3b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ps5nw" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201894 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-serving-cert\") pod \"controller-manager-879f6c89f-kkf4z\" (UID: \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201911 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201927 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/406fb8be-c783-4ef8-8aae-5430b0226d17-bound-sa-token\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201943 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/836ae3f6-06f5-4996-9f9c-cacfb63fe855-srv-cert\") pod \"catalog-operator-68c6474976-jkpz9\" (UID: \"836ae3f6-06f5-4996-9f9c-cacfb63fe855\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jkpz9" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201960 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a361e11a-9e2f-4abf-a8c1-783f328f13a9-config\") pod \"kube-apiserver-operator-766d6c64bb-nw244\" (UID: \"a361e11a-9e2f-4abf-a8c1-783f328f13a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nw244" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201976 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57tbk\" (UniqueName: \"kubernetes.io/projected/a55dd8c8-fb6d-450b-a80d-35e7223d2cff-kube-api-access-57tbk\") pod \"cluster-image-registry-operator-dc59b4c8b-67xxt\" (UID: \"a55dd8c8-fb6d-450b-a80d-35e7223d2cff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67xxt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.201995 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e740ffac-368d-45d5-89a8-25d370581945-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ll2mq\" (UID: \"e740ffac-368d-45d5-89a8-25d370581945\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ll2mq" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.202023 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/faa3ca31-2951-4f0d-84f0-0b19a32c9927-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.202344 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2bdbb40-bece-420f-9ff7-bdeff90c8bd2-trusted-ca\") pod \"ingress-operator-5b745b69d9-kvwdd\" (UID: \"e2bdbb40-bece-420f-9ff7-bdeff90c8bd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kvwdd" Jan 30 08:11:53 crc kubenswrapper[4870]: 
I0130 08:11:53.202421 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/02c6a6cf-5413-4524-a86c-11fa4a19821f-etcd-client\") pod \"etcd-operator-b45778765-ssxgx\" (UID: \"02c6a6cf-5413-4524-a86c-11fa4a19821f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.202441 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f6b1608-a0e3-4b68-ae9b-f8cfeed9cd72-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vzzk7\" (UID: \"6f6b1608-a0e3-4b68-ae9b-f8cfeed9cd72\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vzzk7" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.202789 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/20fcc16b-f2b2-4a33-a8b2-567bec77d7ca-csi-data-dir\") pod \"csi-hostpathplugin-k7wnt\" (UID: \"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.202896 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/853280ad-9d5a-4fe9-852f-c0596e70dc49-node-bootstrap-token\") pod \"machine-config-server-pm4xm\" (UID: \"853280ad-9d5a-4fe9-852f-c0596e70dc49\") " pod="openshift-machine-config-operator/machine-config-server-pm4xm" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.202941 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7914d7ba-40e4-4ada-9dd4-66f3d86d5dc8-signing-key\") pod \"service-ca-9c57cc56f-2l7mq\" (UID: \"7914d7ba-40e4-4ada-9dd4-66f3d86d5dc8\") " pod="openshift-service-ca/service-ca-9c57cc56f-2l7mq" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.202988 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e-service-ca-bundle\") pod \"authentication-operator-69f744f599-l6p59\" (UID: \"4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l6p59" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.203014 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e740ffac-368d-45d5-89a8-25d370581945-config\") pod \"kube-controller-manager-operator-78b949d7b-ll2mq\" (UID: \"e740ffac-368d-45d5-89a8-25d370581945\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ll2mq" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.203780 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e-serving-cert\") pod \"authentication-operator-69f744f599-l6p59\" (UID: \"4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l6p59" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 
08:11:53.203826 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-config\") pod \"controller-manager-879f6c89f-kkf4z\" (UID: \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.204000 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-console-config\") pod \"console-f9d7485db-2mj87\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.204486 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.204521 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/406fb8be-c783-4ef8-8aae-5430b0226d17-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.204558 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx49z\" (UniqueName: \"kubernetes.io/projected/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-kube-api-access-nx49z\") pod \"controller-manager-879f6c89f-kkf4z\" (UID: \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.204583 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faa3ca31-2951-4f0d-84f0-0b19a32c9927-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.205250 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e740ffac-368d-45d5-89a8-25d370581945-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ll2mq\" (UID: \"e740ffac-368d-45d5-89a8-25d370581945\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ll2mq" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.205296 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ede517d-773d-4f0b-8c0a-42ae13359f95-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jh9j6\" (UID: \"8ede517d-773d-4f0b-8c0a-42ae13359f95\") " pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.205317 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-kll8p\" (UniqueName: \"kubernetes.io/projected/7053ea40-6d30-41d8-bcb1-8f55e95feb22-kube-api-access-kll8p\") pod \"package-server-manager-789f6589d5-26vrg\" (UID: \"7053ea40-6d30-41d8-bcb1-8f55e95feb22\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-26vrg" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.205340 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22qbp\" (UniqueName: \"kubernetes.io/projected/1a6267b2-1222-4c0b-a890-c146d83b583d-kube-api-access-22qbp\") pod \"cluster-samples-operator-665b6dd947-tsghw\" (UID: \"1a6267b2-1222-4c0b-a890-c146d83b583d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tsghw" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.205526 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv5rm\" (UniqueName: \"kubernetes.io/projected/20fcc16b-f2b2-4a33-a8b2-567bec77d7ca-kube-api-access-qv5rm\") pod \"csi-hostpathplugin-k7wnt\" (UID: \"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.205566 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-oauth-serving-cert\") pod \"console-f9d7485db-2mj87\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.205734 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c6a6cf-5413-4524-a86c-11fa4a19821f-config\") pod \"etcd-operator-b45778765-ssxgx\" (UID: \"02c6a6cf-5413-4524-a86c-11fa4a19821f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.205952 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/406fb8be-c783-4ef8-8aae-5430b0226d17-registry-tls\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.205981 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/02c6a6cf-5413-4524-a86c-11fa4a19821f-etcd-service-ca\") pod \"etcd-operator-b45778765-ssxgx\" (UID: \"02c6a6cf-5413-4524-a86c-11fa4a19821f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.206005 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-audit-dir\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.206025 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/853280ad-9d5a-4fe9-852f-c0596e70dc49-certs\") pod \"machine-config-server-pm4xm\" (UID: \"853280ad-9d5a-4fe9-852f-c0596e70dc49\") " 
pod="openshift-machine-config-operator/machine-config-server-pm4xm" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.206258 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dp8v\" (UniqueName: \"kubernetes.io/projected/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-kube-api-access-4dp8v\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.206287 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-kkf4z\" (UID: \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.206307 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/46d623aa-7e54-4c20-aed3-3f125395a073-webhook-cert\") pod \"packageserver-d55dfcdfc-dhpbr\" (UID: \"46d623aa-7e54-4c20-aed3-3f125395a073\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.206491 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/faa3ca31-2951-4f0d-84f0-0b19a32c9927-encryption-config\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.206520 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-client-ca\") pod \"controller-manager-879f6c89f-kkf4z\" (UID: \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.206540 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l822t\" (UniqueName: \"kubernetes.io/projected/46d623aa-7e54-4c20-aed3-3f125395a073-kube-api-access-l822t\") pod \"packageserver-d55dfcdfc-dhpbr\" (UID: \"46d623aa-7e54-4c20-aed3-3f125395a073\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.206706 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-audit-policies\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.206735 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c488e93c-573d-4d04-a272-699af1059a0e-serving-cert\") pod \"route-controller-manager-6576b87f9c-8p957\" (UID: \"c488e93c-573d-4d04-a272-699af1059a0e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.206759 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-c5fkn\" (UniqueName: \"kubernetes.io/projected/e2bdbb40-bece-420f-9ff7-bdeff90c8bd2-kube-api-access-c5fkn\") pod \"ingress-operator-5b745b69d9-kvwdd\" (UID: \"e2bdbb40-bece-420f-9ff7-bdeff90c8bd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kvwdd" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.206779 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frbv4\" (UniqueName: \"kubernetes.io/projected/c488e93c-573d-4d04-a272-699af1059a0e-kube-api-access-frbv4\") pod \"route-controller-manager-6576b87f9c-8p957\" (UID: \"c488e93c-573d-4d04-a272-699af1059a0e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.207027 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7914d7ba-40e4-4ada-9dd4-66f3d86d5dc8-signing-cabundle\") pod \"service-ca-9c57cc56f-2l7mq\" (UID: \"7914d7ba-40e4-4ada-9dd4-66f3d86d5dc8\") " pod="openshift-service-ca/service-ca-9c57cc56f-2l7mq" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.207051 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct6hw\" (UniqueName: \"kubernetes.io/projected/2aa49ce7-f902-408a-94f1-da14a661e813-kube-api-access-ct6hw\") pod \"console-f9d7485db-2mj87\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.207074 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe4278ad-53ec-4f7f-9c39-a00b6fa505c5-metrics-certs\") pod \"router-default-5444994796-dfwzs\" (UID: \"fe4278ad-53ec-4f7f-9c39-a00b6fa505c5\") " pod="openshift-ingress/router-default-5444994796-dfwzs" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.207097 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/406fb8be-c783-4ef8-8aae-5430b0226d17-registry-certificates\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.207240 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a55dd8c8-fb6d-450b-a80d-35e7223d2cff-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-67xxt\" (UID: \"a55dd8c8-fb6d-450b-a80d-35e7223d2cff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67xxt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.207286 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a55dd8c8-fb6d-450b-a80d-35e7223d2cff-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-67xxt\" (UID: \"a55dd8c8-fb6d-450b-a80d-35e7223d2cff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67xxt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.207316 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/46d623aa-7e54-4c20-aed3-3f125395a073-tmpfs\") pod \"packageserver-d55dfcdfc-dhpbr\" (UID: \"46d623aa-7e54-4c20-aed3-3f125395a073\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.207343 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-l6p59\" (UID: \"4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l6p59" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.221094 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/02c6a6cf-5413-4524-a86c-11fa4a19821f-etcd-ca\") pod \"etcd-operator-b45778765-ssxgx\" (UID: \"02c6a6cf-5413-4524-a86c-11fa4a19821f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.221817 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/406fb8be-c783-4ef8-8aae-5430b0226d17-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.221922 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/406fb8be-c783-4ef8-8aae-5430b0226d17-trusted-ca\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.221994 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5dzd\" (UniqueName: \"kubernetes.io/projected/853280ad-9d5a-4fe9-852f-c0596e70dc49-kube-api-access-n5dzd\") pod \"machine-config-server-pm4xm\" (UID: \"853280ad-9d5a-4fe9-852f-c0596e70dc49\") " pod="openshift-machine-config-operator/machine-config-server-pm4xm" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.222061 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qp2d\" (UniqueName: \"kubernetes.io/projected/93fd6b37-eee2-4fd5-aa18-51eecea65a3b-kube-api-access-9qp2d\") pod \"collect-profiles-29496000-25p92\" (UID: \"93fd6b37-eee2-4fd5-aa18-51eecea65a3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.222134 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dc32e9a-d384-4f5e-b1ea-c9ac2f57e715-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vjvf5\" (UID: \"2dc32e9a-d384-4f5e-b1ea-c9ac2f57e715\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vjvf5" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.222187 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.222242 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fe4278ad-53ec-4f7f-9c39-a00b6fa505c5-stats-auth\") pod \"router-default-5444994796-dfwzs\" (UID: \"fe4278ad-53ec-4f7f-9c39-a00b6fa505c5\") " pod="openshift-ingress/router-default-5444994796-dfwzs" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.222356 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/faa3ca31-2951-4f0d-84f0-0b19a32c9927-audit-dir\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.222436 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93fd6b37-eee2-4fd5-aa18-51eecea65a3b-config-volume\") pod \"collect-profiles-29496000-25p92\" (UID: \"93fd6b37-eee2-4fd5-aa18-51eecea65a3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.222508 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9624fd43-bfa5-42c8-bebd-95a89988847d-machine-approver-tls\") pod \"machine-approver-56656f9798-vn7wx\" (UID: \"9624fd43-bfa5-42c8-bebd-95a89988847d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vn7wx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.222551 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/539f32b7-3075-49f4-b9f6-e63ac1d76d61-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zghdb\" (UID: \"539f32b7-3075-49f4-b9f6-e63ac1d76d61\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zghdb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.222481 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-audit-dir\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.223436 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a361e11a-9e2f-4abf-a8c1-783f328f13a9-config\") pod \"kube-apiserver-operator-766d6c64bb-nw244\" (UID: \"a361e11a-9e2f-4abf-a8c1-783f328f13a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nw244" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.224987 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 
30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.225486 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc32e9a-d384-4f5e-b1ea-c9ac2f57e715-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vjvf5\" (UID: \"2dc32e9a-d384-4f5e-b1ea-c9ac2f57e715\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vjvf5" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.226245 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/faa3ca31-2951-4f0d-84f0-0b19a32c9927-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.227005 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.227633 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-kkf4z\" (UID: \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.227957 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e2bdbb40-bece-420f-9ff7-bdeff90c8bd2-metrics-tls\") pod \"ingress-operator-5b745b69d9-kvwdd\" (UID: \"e2bdbb40-bece-420f-9ff7-bdeff90c8bd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kvwdd" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.228242 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2bdbb40-bece-420f-9ff7-bdeff90c8bd2-trusted-ca\") pod \"ingress-operator-5b745b69d9-kvwdd\" (UID: \"e2bdbb40-bece-420f-9ff7-bdeff90c8bd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kvwdd" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.231077 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.231254 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faa3ca31-2951-4f0d-84f0-0b19a32c9927-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.231302 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.231767 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2aa49ce7-f902-408a-94f1-da14a661e813-console-serving-cert\") pod \"console-f9d7485db-2mj87\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.231827 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.232396 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/faa3ca31-2951-4f0d-84f0-0b19a32c9927-audit-policies\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.236092 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.236484 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c488e93c-573d-4d04-a272-699af1059a0e-config\") pod \"route-controller-manager-6576b87f9c-8p957\" (UID: \"c488e93c-573d-4d04-a272-699af1059a0e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.236686 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-config\") pod \"controller-manager-879f6c89f-kkf4z\" (UID: \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.237597 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c488e93c-573d-4d04-a272-699af1059a0e-client-ca\") pod \"route-controller-manager-6576b87f9c-8p957\" (UID: \"c488e93c-573d-4d04-a272-699af1059a0e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.237757 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.237754 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-console-config\") pod \"console-f9d7485db-2mj87\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.238583 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-audit-policies\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.239287 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.239469 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-service-ca\") pod \"console-f9d7485db-2mj87\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.240019 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.240374 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/02c6a6cf-5413-4524-a86c-11fa4a19821f-etcd-client\") pod \"etcd-operator-b45778765-ssxgx\" (UID: \"02c6a6cf-5413-4524-a86c-11fa4a19821f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.240370 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/406fb8be-c783-4ef8-8aae-5430b0226d17-trusted-ca\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.240675 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.241316 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c6a6cf-5413-4524-a86c-11fa4a19821f-config\") pod 
\"etcd-operator-b45778765-ssxgx\" (UID: \"02c6a6cf-5413-4524-a86c-11fa4a19821f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.241340 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-trusted-ca-bundle\") pod \"console-f9d7485db-2mj87\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.241583 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/406fb8be-c783-4ef8-8aae-5430b0226d17-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.242935 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/406fb8be-c783-4ef8-8aae-5430b0226d17-registry-certificates\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.243531 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2aa49ce7-f902-408a-94f1-da14a661e813-console-oauth-config\") pod \"console-f9d7485db-2mj87\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.243962 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-client-ca\") pod \"controller-manager-879f6c89f-kkf4z\" (UID: \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.245532 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/faa3ca31-2951-4f0d-84f0-0b19a32c9927-encryption-config\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.246582 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/02c6a6cf-5413-4524-a86c-11fa4a19821f-etcd-service-ca\") pod \"etcd-operator-b45778765-ssxgx\" (UID: \"02c6a6cf-5413-4524-a86c-11fa4a19821f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.248128 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.248218 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/faa3ca31-2951-4f0d-84f0-0b19a32c9927-audit-dir\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: E0130 08:11:53.248318 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:53.748289847 +0000 UTC m=+152.443836956 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.249163 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a55dd8c8-fb6d-450b-a80d-35e7223d2cff-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-67xxt\" (UID: \"a55dd8c8-fb6d-450b-a80d-35e7223d2cff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67xxt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.249333 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-oauth-serving-cert\") pod \"console-f9d7485db-2mj87\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.250474 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/406fb8be-c783-4ef8-8aae-5430b0226d17-registry-tls\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.251047 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dc32e9a-d384-4f5e-b1ea-c9ac2f57e715-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vjvf5\" (UID: \"2dc32e9a-d384-4f5e-b1ea-c9ac2f57e715\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vjvf5" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.252652 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a55dd8c8-fb6d-450b-a80d-35e7223d2cff-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-67xxt\" (UID: \"a55dd8c8-fb6d-450b-a80d-35e7223d2cff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67xxt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.253322 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c488e93c-573d-4d04-a272-699af1059a0e-serving-cert\") pod \"route-controller-manager-6576b87f9c-8p957\" (UID: \"c488e93c-573d-4d04-a272-699af1059a0e\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.253602 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-serving-cert\") pod \"controller-manager-879f6c89f-kkf4z\" (UID: \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.259465 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c6a6cf-5413-4524-a86c-11fa4a19821f-serving-cert\") pod \"etcd-operator-b45778765-ssxgx\" (UID: \"02c6a6cf-5413-4524-a86c-11fa4a19821f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.259646 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/406fb8be-c783-4ef8-8aae-5430b0226d17-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.268555 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/faa3ca31-2951-4f0d-84f0-0b19a32c9927-etcd-client\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.268963 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faa3ca31-2951-4f0d-84f0-0b19a32c9927-serving-cert\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.270727 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzzzz\" (UniqueName: \"kubernetes.io/projected/faa3ca31-2951-4f0d-84f0-0b19a32c9927-kube-api-access-nzzzz\") pod \"apiserver-7bbb656c7d-brrxb\" (UID: \"faa3ca31-2951-4f0d-84f0-0b19a32c9927\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.272619 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a361e11a-9e2f-4abf-a8c1-783f328f13a9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-nw244\" (UID: \"a361e11a-9e2f-4abf-a8c1-783f328f13a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nw244" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.294511 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jldpq\" (UniqueName: \"kubernetes.io/projected/406fb8be-c783-4ef8-8aae-5430b0226d17-kube-api-access-jldpq\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.307376 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5fkn\" (UniqueName: 
\"kubernetes.io/projected/e2bdbb40-bece-420f-9ff7-bdeff90c8bd2-kube-api-access-c5fkn\") pod \"ingress-operator-5b745b69d9-kvwdd\" (UID: \"e2bdbb40-bece-420f-9ff7-bdeff90c8bd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kvwdd" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.312981 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.325074 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.325448 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78a27619-258e-4bed-afb0-1706904c6f9d-config\") pod \"service-ca-operator-777779d784-xvt4n\" (UID: \"78a27619-258e-4bed-afb0-1706904c6f9d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xvt4n" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.325485 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j4f7\" (UniqueName: \"kubernetes.io/projected/38a2c6cb-fd9d-42f6-8774-647c544bd0f9-kube-api-access-6j4f7\") pod \"multus-admission-controller-857f4d67dd-g6x2r\" (UID: \"38a2c6cb-fd9d-42f6-8774-647c544bd0f9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g6x2r" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.325507 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/836ae3f6-06f5-4996-9f9c-cacfb63fe855-profile-collector-cert\") pod \"catalog-operator-68c6474976-jkpz9\" (UID: \"836ae3f6-06f5-4996-9f9c-cacfb63fe855\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jkpz9" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.325525 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e-config\") pod \"authentication-operator-69f744f599-l6p59\" (UID: \"4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l6p59" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.325551 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78a27619-258e-4bed-afb0-1706904c6f9d-serving-cert\") pod \"service-ca-operator-777779d784-xvt4n\" (UID: \"78a27619-258e-4bed-afb0-1706904c6f9d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xvt4n" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.325578 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a58a222f-98a0-46b4-9ea8-36a922f6a349-config-volume\") pod \"dns-default-mnsdp\" (UID: \"a58a222f-98a0-46b4-9ea8-36a922f6a349\") " pod="openshift-dns/dns-default-mnsdp" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.325599 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/a8a1f91a-b48e-442f-9ab6-d704b3927315-cert\") pod \"ingress-canary-szhwx\" (UID: \"a8a1f91a-b48e-442f-9ab6-d704b3927315\") " pod="openshift-ingress-canary/ingress-canary-szhwx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.325619 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2xk7\" (UniqueName: \"kubernetes.io/projected/539f32b7-3075-49f4-b9f6-e63ac1d76d61-kube-api-access-p2xk7\") pod \"machine-config-controller-84d6567774-zghdb\" (UID: \"539f32b7-3075-49f4-b9f6-e63ac1d76d61\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zghdb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.325642 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7053ea40-6d30-41d8-bcb1-8f55e95feb22-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-26vrg\" (UID: \"7053ea40-6d30-41d8-bcb1-8f55e95feb22\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-26vrg" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.325801 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/53a09b74-1b42-4535-a853-0752b6d1f90a-srv-cert\") pod \"olm-operator-6b444d44fb-fn5hp\" (UID: \"53a09b74-1b42-4535-a853-0752b6d1f90a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fn5hp" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.325832 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fe4278ad-53ec-4f7f-9c39-a00b6fa505c5-default-certificate\") pod \"router-default-5444994796-dfwzs\" (UID: \"fe4278ad-53ec-4f7f-9c39-a00b6fa505c5\") " pod="openshift-ingress/router-default-5444994796-dfwzs" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.326418 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78a27619-258e-4bed-afb0-1706904c6f9d-config\") pod \"service-ca-operator-777779d784-xvt4n\" (UID: \"78a27619-258e-4bed-afb0-1706904c6f9d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xvt4n" Jan 30 08:11:53 crc kubenswrapper[4870]: E0130 08:11:53.326560 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:53.826535303 +0000 UTC m=+152.522082412 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.326812 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a58a222f-98a0-46b4-9ea8-36a922f6a349-config-volume\") pod \"dns-default-mnsdp\" (UID: \"a58a222f-98a0-46b4-9ea8-36a922f6a349\") " pod="openshift-dns/dns-default-mnsdp" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.326904 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e64f35db-e72b-4d73-b501-7c2aff5cc609-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qnjxg\" (UID: \"e64f35db-e72b-4d73-b501-7c2aff5cc609\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qnjxg" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.326927 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgt5z\" (UniqueName: \"kubernetes.io/projected/9624fd43-bfa5-42c8-bebd-95a89988847d-kube-api-access-wgt5z\") pod \"machine-approver-56656f9798-vn7wx\" (UID: \"9624fd43-bfa5-42c8-bebd-95a89988847d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vn7wx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.327071 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e-config\") pod \"authentication-operator-69f744f599-l6p59\" (UID: \"4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l6p59" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.327441 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e64f35db-e72b-4d73-b501-7c2aff5cc609-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qnjxg\" (UID: \"e64f35db-e72b-4d73-b501-7c2aff5cc609\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qnjxg" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.327504 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crsg9\" (UniqueName: \"kubernetes.io/projected/836ae3f6-06f5-4996-9f9c-cacfb63fe855-kube-api-access-crsg9\") pod \"catalog-operator-68c6474976-jkpz9\" (UID: \"836ae3f6-06f5-4996-9f9c-cacfb63fe855\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jkpz9" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.327530 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/20fcc16b-f2b2-4a33-a8b2-567bec77d7ca-socket-dir\") pod \"csi-hostpathplugin-k7wnt\" (UID: \"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.327845 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a58a222f-98a0-46b4-9ea8-36a922f6a349-metrics-tls\") pod \"dns-default-mnsdp\" (UID: \"a58a222f-98a0-46b4-9ea8-36a922f6a349\") " pod="openshift-dns/dns-default-mnsdp" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.328542 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/20fcc16b-f2b2-4a33-a8b2-567bec77d7ca-socket-dir\") pod \"csi-hostpathplugin-k7wnt\" (UID: \"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.327868 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bznlq\" (UniqueName: \"kubernetes.io/projected/4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e-kube-api-access-bznlq\") pod \"authentication-operator-69f744f599-l6p59\" (UID: \"4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l6p59" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.328981 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmwtm\" (UniqueName: \"kubernetes.io/projected/8ede517d-773d-4f0b-8c0a-42ae13359f95-kube-api-access-qmwtm\") pod \"marketplace-operator-79b997595-jh9j6\" (UID: \"8ede517d-773d-4f0b-8c0a-42ae13359f95\") " pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.329006 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88dw5\" (UniqueName: \"kubernetes.io/projected/53a09b74-1b42-4535-a853-0752b6d1f90a-kube-api-access-88dw5\") pod \"olm-operator-6b444d44fb-fn5hp\" (UID: \"53a09b74-1b42-4535-a853-0752b6d1f90a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fn5hp" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.329032 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38a2c6cb-fd9d-42f6-8774-647c544bd0f9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-g6x2r\" (UID: \"38a2c6cb-fd9d-42f6-8774-647c544bd0f9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g6x2r" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.329053 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe4278ad-53ec-4f7f-9c39-a00b6fa505c5-service-ca-bundle\") pod \"router-default-5444994796-dfwzs\" (UID: \"fe4278ad-53ec-4f7f-9c39-a00b6fa505c5\") " pod="openshift-ingress/router-default-5444994796-dfwzs" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.329072 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdz27\" (UniqueName: \"kubernetes.io/projected/fe4278ad-53ec-4f7f-9c39-a00b6fa505c5-kube-api-access-wdz27\") pod \"router-default-5444994796-dfwzs\" (UID: \"fe4278ad-53ec-4f7f-9c39-a00b6fa505c5\") " pod="openshift-ingress/router-default-5444994796-dfwzs" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.329094 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/46d623aa-7e54-4c20-aed3-3f125395a073-apiservice-cert\") pod \"packageserver-d55dfcdfc-dhpbr\" (UID: \"46d623aa-7e54-4c20-aed3-3f125395a073\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr" Jan 30 
08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.329735 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/53a09b74-1b42-4535-a853-0752b6d1f90a-srv-cert\") pod \"olm-operator-6b444d44fb-fn5hp\" (UID: \"53a09b74-1b42-4535-a853-0752b6d1f90a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fn5hp" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.329802 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e64f35db-e72b-4d73-b501-7c2aff5cc609-proxy-tls\") pod \"machine-config-operator-74547568cd-qnjxg\" (UID: \"e64f35db-e72b-4d73-b501-7c2aff5cc609\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qnjxg" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.329833 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lss9f\" (UniqueName: \"kubernetes.io/projected/78a27619-258e-4bed-afb0-1706904c6f9d-kube-api-access-lss9f\") pod \"service-ca-operator-777779d784-xvt4n\" (UID: \"78a27619-258e-4bed-afb0-1706904c6f9d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xvt4n" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.329859 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9624fd43-bfa5-42c8-bebd-95a89988847d-auth-proxy-config\") pod \"machine-approver-56656f9798-vn7wx\" (UID: \"9624fd43-bfa5-42c8-bebd-95a89988847d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vn7wx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.329904 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/20fcc16b-f2b2-4a33-a8b2-567bec77d7ca-plugins-dir\") pod \"csi-hostpathplugin-k7wnt\" (UID: \"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.329926 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1a6267b2-1222-4c0b-a890-c146d83b583d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tsghw\" (UID: \"1a6267b2-1222-4c0b-a890-c146d83b583d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tsghw" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330005 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/20fcc16b-f2b2-4a33-a8b2-567bec77d7ca-registration-dir\") pod \"csi-hostpathplugin-k7wnt\" (UID: \"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330034 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9624fd43-bfa5-42c8-bebd-95a89988847d-config\") pod \"machine-approver-56656f9798-vn7wx\" (UID: \"9624fd43-bfa5-42c8-bebd-95a89988847d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vn7wx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330051 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/53a09b74-1b42-4535-a853-0752b6d1f90a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-fn5hp\" (UID: \"53a09b74-1b42-4535-a853-0752b6d1f90a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fn5hp" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330088 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wsh5\" (UniqueName: \"kubernetes.io/projected/6f6b1608-a0e3-4b68-ae9b-f8cfeed9cd72-kube-api-access-6wsh5\") pod \"control-plane-machine-set-operator-78cbb6b69f-vzzk7\" (UID: \"6f6b1608-a0e3-4b68-ae9b-f8cfeed9cd72\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vzzk7" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330107 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chvf4\" (UniqueName: \"kubernetes.io/projected/e64f35db-e72b-4d73-b501-7c2aff5cc609-kube-api-access-chvf4\") pod \"machine-config-operator-74547568cd-qnjxg\" (UID: \"e64f35db-e72b-4d73-b501-7c2aff5cc609\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qnjxg" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330126 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93fd6b37-eee2-4fd5-aa18-51eecea65a3b-secret-volume\") pod \"collect-profiles-29496000-25p92\" (UID: \"93fd6b37-eee2-4fd5-aa18-51eecea65a3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330145 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77lc5\" (UniqueName: \"kubernetes.io/projected/7914d7ba-40e4-4ada-9dd4-66f3d86d5dc8-kube-api-access-77lc5\") pod \"service-ca-9c57cc56f-2l7mq\" (UID: \"7914d7ba-40e4-4ada-9dd4-66f3d86d5dc8\") " pod="openshift-service-ca/service-ca-9c57cc56f-2l7mq" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330183 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ede517d-773d-4f0b-8c0a-42ae13359f95-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jh9j6\" (UID: \"8ede517d-773d-4f0b-8c0a-42ae13359f95\") " pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330202 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8x7q\" (UniqueName: \"kubernetes.io/projected/f6d9ba19-88ea-489c-9f03-918e8b225e3b-kube-api-access-n8x7q\") pod \"migrator-59844c95c7-ps5nw\" (UID: \"f6d9ba19-88ea-489c-9f03-918e8b225e3b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ps5nw" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330228 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/836ae3f6-06f5-4996-9f9c-cacfb63fe855-srv-cert\") pod \"catalog-operator-68c6474976-jkpz9\" (UID: \"836ae3f6-06f5-4996-9f9c-cacfb63fe855\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jkpz9" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330246 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e740ffac-368d-45d5-89a8-25d370581945-serving-cert\") pod 
\"kube-controller-manager-operator-78b949d7b-ll2mq\" (UID: \"e740ffac-368d-45d5-89a8-25d370581945\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ll2mq" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330276 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f6b1608-a0e3-4b68-ae9b-f8cfeed9cd72-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vzzk7\" (UID: \"6f6b1608-a0e3-4b68-ae9b-f8cfeed9cd72\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vzzk7" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330296 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/20fcc16b-f2b2-4a33-a8b2-567bec77d7ca-csi-data-dir\") pod \"csi-hostpathplugin-k7wnt\" (UID: \"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330315 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/853280ad-9d5a-4fe9-852f-c0596e70dc49-node-bootstrap-token\") pod \"machine-config-server-pm4xm\" (UID: \"853280ad-9d5a-4fe9-852f-c0596e70dc49\") " pod="openshift-machine-config-operator/machine-config-server-pm4xm" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330332 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7914d7ba-40e4-4ada-9dd4-66f3d86d5dc8-signing-key\") pod \"service-ca-9c57cc56f-2l7mq\" (UID: \"7914d7ba-40e4-4ada-9dd4-66f3d86d5dc8\") " pod="openshift-service-ca/service-ca-9c57cc56f-2l7mq" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330363 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e-service-ca-bundle\") pod \"authentication-operator-69f744f599-l6p59\" (UID: \"4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l6p59" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330380 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e740ffac-368d-45d5-89a8-25d370581945-config\") pod \"kube-controller-manager-operator-78b949d7b-ll2mq\" (UID: \"e740ffac-368d-45d5-89a8-25d370581945\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ll2mq" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330397 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e-serving-cert\") pod \"authentication-operator-69f744f599-l6p59\" (UID: \"4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l6p59" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330429 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e740ffac-368d-45d5-89a8-25d370581945-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ll2mq\" (UID: 
\"e740ffac-368d-45d5-89a8-25d370581945\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ll2mq" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330449 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ede517d-773d-4f0b-8c0a-42ae13359f95-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jh9j6\" (UID: \"8ede517d-773d-4f0b-8c0a-42ae13359f95\") " pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330472 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kll8p\" (UniqueName: \"kubernetes.io/projected/7053ea40-6d30-41d8-bcb1-8f55e95feb22-kube-api-access-kll8p\") pod \"package-server-manager-789f6589d5-26vrg\" (UID: \"7053ea40-6d30-41d8-bcb1-8f55e95feb22\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-26vrg" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330488 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22qbp\" (UniqueName: \"kubernetes.io/projected/1a6267b2-1222-4c0b-a890-c146d83b583d-kube-api-access-22qbp\") pod \"cluster-samples-operator-665b6dd947-tsghw\" (UID: \"1a6267b2-1222-4c0b-a890-c146d83b583d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tsghw" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330505 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv5rm\" (UniqueName: \"kubernetes.io/projected/20fcc16b-f2b2-4a33-a8b2-567bec77d7ca-kube-api-access-qv5rm\") pod \"csi-hostpathplugin-k7wnt\" (UID: \"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330527 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/853280ad-9d5a-4fe9-852f-c0596e70dc49-certs\") pod \"machine-config-server-pm4xm\" (UID: \"853280ad-9d5a-4fe9-852f-c0596e70dc49\") " pod="openshift-machine-config-operator/machine-config-server-pm4xm" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330551 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/46d623aa-7e54-4c20-aed3-3f125395a073-webhook-cert\") pod \"packageserver-d55dfcdfc-dhpbr\" (UID: \"46d623aa-7e54-4c20-aed3-3f125395a073\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330569 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l822t\" (UniqueName: \"kubernetes.io/projected/46d623aa-7e54-4c20-aed3-3f125395a073-kube-api-access-l822t\") pod \"packageserver-d55dfcdfc-dhpbr\" (UID: \"46d623aa-7e54-4c20-aed3-3f125395a073\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330600 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7914d7ba-40e4-4ada-9dd4-66f3d86d5dc8-signing-cabundle\") pod \"service-ca-9c57cc56f-2l7mq\" (UID: \"7914d7ba-40e4-4ada-9dd4-66f3d86d5dc8\") " pod="openshift-service-ca/service-ca-9c57cc56f-2l7mq" Jan 30 08:11:53 crc 
kubenswrapper[4870]: I0130 08:11:53.330644 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe4278ad-53ec-4f7f-9c39-a00b6fa505c5-metrics-certs\") pod \"router-default-5444994796-dfwzs\" (UID: \"fe4278ad-53ec-4f7f-9c39-a00b6fa505c5\") " pod="openshift-ingress/router-default-5444994796-dfwzs" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330668 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/46d623aa-7e54-4c20-aed3-3f125395a073-tmpfs\") pod \"packageserver-d55dfcdfc-dhpbr\" (UID: \"46d623aa-7e54-4c20-aed3-3f125395a073\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330685 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-l6p59\" (UID: \"4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l6p59" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330701 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5dzd\" (UniqueName: \"kubernetes.io/projected/853280ad-9d5a-4fe9-852f-c0596e70dc49-kube-api-access-n5dzd\") pod \"machine-config-server-pm4xm\" (UID: \"853280ad-9d5a-4fe9-852f-c0596e70dc49\") " pod="openshift-machine-config-operator/machine-config-server-pm4xm" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330700 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe4278ad-53ec-4f7f-9c39-a00b6fa505c5-service-ca-bundle\") pod \"router-default-5444994796-dfwzs\" (UID: \"fe4278ad-53ec-4f7f-9c39-a00b6fa505c5\") " pod="openshift-ingress/router-default-5444994796-dfwzs" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330718 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qp2d\" (UniqueName: \"kubernetes.io/projected/93fd6b37-eee2-4fd5-aa18-51eecea65a3b-kube-api-access-9qp2d\") pod \"collect-profiles-29496000-25p92\" (UID: \"93fd6b37-eee2-4fd5-aa18-51eecea65a3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330739 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fe4278ad-53ec-4f7f-9c39-a00b6fa505c5-stats-auth\") pod \"router-default-5444994796-dfwzs\" (UID: \"fe4278ad-53ec-4f7f-9c39-a00b6fa505c5\") " pod="openshift-ingress/router-default-5444994796-dfwzs" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330759 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93fd6b37-eee2-4fd5-aa18-51eecea65a3b-config-volume\") pod \"collect-profiles-29496000-25p92\" (UID: \"93fd6b37-eee2-4fd5-aa18-51eecea65a3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330776 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/9624fd43-bfa5-42c8-bebd-95a89988847d-machine-approver-tls\") pod \"machine-approver-56656f9798-vn7wx\" (UID: \"9624fd43-bfa5-42c8-bebd-95a89988847d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vn7wx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330797 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/539f32b7-3075-49f4-b9f6-e63ac1d76d61-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zghdb\" (UID: \"539f32b7-3075-49f4-b9f6-e63ac1d76d61\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zghdb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330817 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/20fcc16b-f2b2-4a33-a8b2-567bec77d7ca-csi-data-dir\") pod \"csi-hostpathplugin-k7wnt\" (UID: \"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330835 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm9wg\" (UniqueName: \"kubernetes.io/projected/a8a1f91a-b48e-442f-9ab6-d704b3927315-kube-api-access-nm9wg\") pod \"ingress-canary-szhwx\" (UID: \"a8a1f91a-b48e-442f-9ab6-d704b3927315\") " pod="openshift-ingress-canary/ingress-canary-szhwx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330865 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/20fcc16b-f2b2-4a33-a8b2-567bec77d7ca-mountpoint-dir\") pod \"csi-hostpathplugin-k7wnt\" (UID: \"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330897 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk6dp\" (UniqueName: \"kubernetes.io/projected/a58a222f-98a0-46b4-9ea8-36a922f6a349-kube-api-access-kk6dp\") pod \"dns-default-mnsdp\" (UID: \"a58a222f-98a0-46b4-9ea8-36a922f6a349\") " pod="openshift-dns/dns-default-mnsdp" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330917 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e64f35db-e72b-4d73-b501-7c2aff5cc609-images\") pod \"machine-config-operator-74547568cd-qnjxg\" (UID: \"e64f35db-e72b-4d73-b501-7c2aff5cc609\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qnjxg" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.330946 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/539f32b7-3075-49f4-b9f6-e63ac1d76d61-proxy-tls\") pod \"machine-config-controller-84d6567774-zghdb\" (UID: \"539f32b7-3075-49f4-b9f6-e63ac1d76d61\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zghdb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.332715 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8a1f91a-b48e-442f-9ab6-d704b3927315-cert\") pod \"ingress-canary-szhwx\" (UID: \"a8a1f91a-b48e-442f-9ab6-d704b3927315\") " pod="openshift-ingress-canary/ingress-canary-szhwx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.336014 4870 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/836ae3f6-06f5-4996-9f9c-cacfb63fe855-profile-collector-cert\") pod \"catalog-operator-68c6474976-jkpz9\" (UID: \"836ae3f6-06f5-4996-9f9c-cacfb63fe855\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jkpz9" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.336400 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38a2c6cb-fd9d-42f6-8774-647c544bd0f9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-g6x2r\" (UID: \"38a2c6cb-fd9d-42f6-8774-647c544bd0f9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g6x2r" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.336822 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a58a222f-98a0-46b4-9ea8-36a922f6a349-metrics-tls\") pod \"dns-default-mnsdp\" (UID: \"a58a222f-98a0-46b4-9ea8-36a922f6a349\") " pod="openshift-dns/dns-default-mnsdp" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.337158 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/539f32b7-3075-49f4-b9f6-e63ac1d76d61-proxy-tls\") pod \"machine-config-controller-84d6567774-zghdb\" (UID: \"539f32b7-3075-49f4-b9f6-e63ac1d76d61\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zghdb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.337411 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78a27619-258e-4bed-afb0-1706904c6f9d-serving-cert\") pod \"service-ca-operator-777779d784-xvt4n\" (UID: \"78a27619-258e-4bed-afb0-1706904c6f9d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xvt4n" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.337523 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/20fcc16b-f2b2-4a33-a8b2-567bec77d7ca-registration-dir\") pod \"csi-hostpathplugin-k7wnt\" (UID: \"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.338578 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ede517d-773d-4f0b-8c0a-42ae13359f95-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jh9j6\" (UID: \"8ede517d-773d-4f0b-8c0a-42ae13359f95\") " pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.338734 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9624fd43-bfa5-42c8-bebd-95a89988847d-config\") pod \"machine-approver-56656f9798-vn7wx\" (UID: \"9624fd43-bfa5-42c8-bebd-95a89988847d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vn7wx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.338759 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e64f35db-e72b-4d73-b501-7c2aff5cc609-proxy-tls\") pod \"machine-config-operator-74547568cd-qnjxg\" (UID: \"e64f35db-e72b-4d73-b501-7c2aff5cc609\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qnjxg" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.338979 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dp8v\" (UniqueName: \"kubernetes.io/projected/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-kube-api-access-4dp8v\") pod \"oauth-openshift-558db77b4-xxrkx\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.339574 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/46d623aa-7e54-4c20-aed3-3f125395a073-apiservice-cert\") pod \"packageserver-d55dfcdfc-dhpbr\" (UID: \"46d623aa-7e54-4c20-aed3-3f125395a073\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.339589 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7914d7ba-40e4-4ada-9dd4-66f3d86d5dc8-signing-cabundle\") pod \"service-ca-9c57cc56f-2l7mq\" (UID: \"7914d7ba-40e4-4ada-9dd4-66f3d86d5dc8\") " pod="openshift-service-ca/service-ca-9c57cc56f-2l7mq" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.340353 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/853280ad-9d5a-4fe9-852f-c0596e70dc49-node-bootstrap-token\") pod \"machine-config-server-pm4xm\" (UID: \"853280ad-9d5a-4fe9-852f-c0596e70dc49\") " pod="openshift-machine-config-operator/machine-config-server-pm4xm" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.340606 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e740ffac-368d-45d5-89a8-25d370581945-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ll2mq\" (UID: \"e740ffac-368d-45d5-89a8-25d370581945\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ll2mq" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.340976 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1a6267b2-1222-4c0b-a890-c146d83b583d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tsghw\" (UID: \"1a6267b2-1222-4c0b-a890-c146d83b583d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tsghw" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.340992 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93fd6b37-eee2-4fd5-aa18-51eecea65a3b-config-volume\") pod \"collect-profiles-29496000-25p92\" (UID: \"93fd6b37-eee2-4fd5-aa18-51eecea65a3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.341463 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e-service-ca-bundle\") pod \"authentication-operator-69f744f599-l6p59\" (UID: \"4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l6p59" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.341566 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9624fd43-bfa5-42c8-bebd-95a89988847d-auth-proxy-config\") pod \"machine-approver-56656f9798-vn7wx\" (UID: \"9624fd43-bfa5-42c8-bebd-95a89988847d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vn7wx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.341627 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/20fcc16b-f2b2-4a33-a8b2-567bec77d7ca-plugins-dir\") pod \"csi-hostpathplugin-k7wnt\" (UID: \"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.341868 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f6b1608-a0e3-4b68-ae9b-f8cfeed9cd72-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vzzk7\" (UID: \"6f6b1608-a0e3-4b68-ae9b-f8cfeed9cd72\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vzzk7" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.342155 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fe4278ad-53ec-4f7f-9c39-a00b6fa505c5-default-certificate\") pod \"router-default-5444994796-dfwzs\" (UID: \"fe4278ad-53ec-4f7f-9c39-a00b6fa505c5\") " pod="openshift-ingress/router-default-5444994796-dfwzs" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.343014 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/46d623aa-7e54-4c20-aed3-3f125395a073-tmpfs\") pod \"packageserver-d55dfcdfc-dhpbr\" (UID: \"46d623aa-7e54-4c20-aed3-3f125395a073\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.344025 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/853280ad-9d5a-4fe9-852f-c0596e70dc49-certs\") pod \"machine-config-server-pm4xm\" (UID: \"853280ad-9d5a-4fe9-852f-c0596e70dc49\") " pod="openshift-machine-config-operator/machine-config-server-pm4xm" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.344077 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e740ffac-368d-45d5-89a8-25d370581945-config\") pod \"kube-controller-manager-operator-78b949d7b-ll2mq\" (UID: \"e740ffac-368d-45d5-89a8-25d370581945\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ll2mq" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.344583 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/20fcc16b-f2b2-4a33-a8b2-567bec77d7ca-mountpoint-dir\") pod \"csi-hostpathplugin-k7wnt\" (UID: \"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.345397 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-l6p59\" (UID: \"4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-l6p59" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.346213 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7914d7ba-40e4-4ada-9dd4-66f3d86d5dc8-signing-key\") pod \"service-ca-9c57cc56f-2l7mq\" (UID: \"7914d7ba-40e4-4ada-9dd4-66f3d86d5dc8\") " pod="openshift-service-ca/service-ca-9c57cc56f-2l7mq" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.346803 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7053ea40-6d30-41d8-bcb1-8f55e95feb22-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-26vrg\" (UID: \"7053ea40-6d30-41d8-bcb1-8f55e95feb22\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-26vrg" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.346980 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/46d623aa-7e54-4c20-aed3-3f125395a073-webhook-cert\") pod \"packageserver-d55dfcdfc-dhpbr\" (UID: \"46d623aa-7e54-4c20-aed3-3f125395a073\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.347117 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe4278ad-53ec-4f7f-9c39-a00b6fa505c5-metrics-certs\") pod \"router-default-5444994796-dfwzs\" (UID: \"fe4278ad-53ec-4f7f-9c39-a00b6fa505c5\") " pod="openshift-ingress/router-default-5444994796-dfwzs" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.350053 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e64f35db-e72b-4d73-b501-7c2aff5cc609-images\") pod \"machine-config-operator-74547568cd-qnjxg\" (UID: \"e64f35db-e72b-4d73-b501-7c2aff5cc609\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qnjxg" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.350411 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/836ae3f6-06f5-4996-9f9c-cacfb63fe855-srv-cert\") pod \"catalog-operator-68c6474976-jkpz9\" (UID: \"836ae3f6-06f5-4996-9f9c-cacfb63fe855\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jkpz9" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.350600 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9624fd43-bfa5-42c8-bebd-95a89988847d-machine-approver-tls\") pod \"machine-approver-56656f9798-vn7wx\" (UID: \"9624fd43-bfa5-42c8-bebd-95a89988847d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vn7wx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.351679 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e-serving-cert\") pod \"authentication-operator-69f744f599-l6p59\" (UID: \"4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l6p59" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.351849 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/93fd6b37-eee2-4fd5-aa18-51eecea65a3b-secret-volume\") pod \"collect-profiles-29496000-25p92\" (UID: \"93fd6b37-eee2-4fd5-aa18-51eecea65a3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.351974 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fe4278ad-53ec-4f7f-9c39-a00b6fa505c5-stats-auth\") pod \"router-default-5444994796-dfwzs\" (UID: \"fe4278ad-53ec-4f7f-9c39-a00b6fa505c5\") " pod="openshift-ingress/router-default-5444994796-dfwzs" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.352846 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/539f32b7-3075-49f4-b9f6-e63ac1d76d61-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zghdb\" (UID: \"539f32b7-3075-49f4-b9f6-e63ac1d76d61\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zghdb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.353946 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/53a09b74-1b42-4535-a853-0752b6d1f90a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-fn5hp\" (UID: \"53a09b74-1b42-4535-a853-0752b6d1f90a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fn5hp" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.356239 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ede517d-773d-4f0b-8c0a-42ae13359f95-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jh9j6\" (UID: \"8ede517d-773d-4f0b-8c0a-42ae13359f95\") " pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.362155 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jr94b"] Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.364306 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frbv4\" (UniqueName: \"kubernetes.io/projected/c488e93c-573d-4d04-a272-699af1059a0e-kube-api-access-frbv4\") pod \"route-controller-manager-6576b87f9c-8p957\" (UID: \"c488e93c-573d-4d04-a272-699af1059a0e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.368407 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57tbk\" (UniqueName: \"kubernetes.io/projected/a55dd8c8-fb6d-450b-a80d-35e7223d2cff-kube-api-access-57tbk\") pod \"cluster-image-registry-operator-dc59b4c8b-67xxt\" (UID: \"a55dd8c8-fb6d-450b-a80d-35e7223d2cff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67xxt" Jan 30 08:11:53 crc kubenswrapper[4870]: W0130 08:11:53.372090 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod042ed63b_a1a9_4072_ae87_71b9fb98280c.slice/crio-32bfae6fb5cd0844ffae17d236efc6733c35b12c04679da68fc6b71a72a5cb35 WatchSource:0}: Error finding container 32bfae6fb5cd0844ffae17d236efc6733c35b12c04679da68fc6b71a72a5cb35: Status 404 returned error can't find the container with id 32bfae6fb5cd0844ffae17d236efc6733c35b12c04679da68fc6b71a72a5cb35 Jan 30 
08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.382760 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.389910 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/406fb8be-c783-4ef8-8aae-5430b0226d17-bound-sa-token\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.394655 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-s6768"] Jan 30 08:11:53 crc kubenswrapper[4870]: W0130 08:11:53.404444 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41ae1460_1e39_4d11_9357_3e0111521a8e.slice/crio-b2e7a12e8e0a754fdea117b1a6d70f90fbef4cf713402e604223c579e30d53dd WatchSource:0}: Error finding container b2e7a12e8e0a754fdea117b1a6d70f90fbef4cf713402e604223c579e30d53dd: Status 404 returned error can't find the container with id b2e7a12e8e0a754fdea117b1a6d70f90fbef4cf713402e604223c579e30d53dd Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.417190 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2bdbb40-bece-420f-9ff7-bdeff90c8bd2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kvwdd\" (UID: \"e2bdbb40-bece-420f-9ff7-bdeff90c8bd2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kvwdd" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.431504 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:53 crc kubenswrapper[4870]: E0130 08:11:53.432475 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:53.932411084 +0000 UTC m=+152.627958193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.434333 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46h2g\" (UniqueName: \"kubernetes.io/projected/2dc32e9a-d384-4f5e-b1ea-c9ac2f57e715-kube-api-access-46h2g\") pod \"openshift-apiserver-operator-796bbdcf4f-vjvf5\" (UID: \"2dc32e9a-d384-4f5e-b1ea-c9ac2f57e715\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vjvf5" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.455997 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.467350 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a361e11a-9e2f-4abf-a8c1-783f328f13a9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-nw244\" (UID: \"a361e11a-9e2f-4abf-a8c1-783f328f13a9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nw244" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.468042 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx49z\" (UniqueName: \"kubernetes.io/projected/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-kube-api-access-nx49z\") pod \"controller-manager-879f6c89f-kkf4z\" (UID: \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.477309 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8nfd4"] Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.488072 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbwqk\" (UniqueName: \"kubernetes.io/projected/02c6a6cf-5413-4524-a86c-11fa4a19821f-kube-api-access-hbwqk\") pod \"etcd-operator-b45778765-ssxgx\" (UID: \"02c6a6cf-5413-4524-a86c-11fa4a19821f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.492952 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lcwc"] Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.500313 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-2g2tj"] Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.507823 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct6hw\" (UniqueName: \"kubernetes.io/projected/2aa49ce7-f902-408a-94f1-da14a661e813-kube-api-access-ct6hw\") pod \"console-f9d7485db-2mj87\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.533542 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: E0130 08:11:53.534113 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:54.034097591 +0000 UTC m=+152.729644700 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:53 crc kubenswrapper[4870]: W0130 08:11:53.534322 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0373f9a1_1537_4f29_905a_b0fb2affc113.slice/crio-c692b0d8c3c6b489e0a3be41a0f61c863ee7572f9ff504585ba318691ad55a3a WatchSource:0}: Error finding container c692b0d8c3c6b489e0a3be41a0f61c863ee7572f9ff504585ba318691ad55a3a: Status 404 returned error can't find the container with id c692b0d8c3c6b489e0a3be41a0f61c863ee7572f9ff504585ba318691ad55a3a Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.546015 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb"] Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.547024 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a55dd8c8-fb6d-450b-a80d-35e7223d2cff-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-67xxt\" (UID: \"a55dd8c8-fb6d-450b-a80d-35e7223d2cff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67xxt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.549374 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vjvf5" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.549811 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmfbv\" (UniqueName: \"kubernetes.io/projected/78280554-7b5b-4ccf-a674-2664144e4f5a-kube-api-access-dmfbv\") pod \"downloads-7954f5f757-v7bvt\" (UID: \"78280554-7b5b-4ccf-a674-2664144e4f5a\") " pod="openshift-console/downloads-7954f5f757-v7bvt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.554677 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-cdzxd"] Jan 30 08:11:53 crc kubenswrapper[4870]: W0130 08:11:53.558031 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15eddd48_9a41_41cb_a284_80d01c7f8aad.slice/crio-5f268d6983af1b7b879bee78cbfed53fb30971cb70a4d5ffa4f6c3907b23bc91 WatchSource:0}: Error finding container 5f268d6983af1b7b879bee78cbfed53fb30971cb70a4d5ffa4f6c3907b23bc91: Status 404 returned error can't find the container with id 5f268d6983af1b7b879bee78cbfed53fb30971cb70a4d5ffa4f6c3907b23bc91 Jan 30 08:11:53 crc kubenswrapper[4870]: W0130 08:11:53.569737 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfaa3ca31_2951_4f0d_84f0_0b19a32c9927.slice/crio-914a412c6899858cfe6663eb8b2abe03171d9143153a4ffb21f681ec056f1903 WatchSource:0}: Error finding container 914a412c6899858cfe6663eb8b2abe03171d9143153a4ffb21f681ec056f1903: Status 404 returned error can't find the container with id 914a412c6899858cfe6663eb8b2abe03171d9143153a4ffb21f681ec056f1903 Jan 30 08:11:53 crc kubenswrapper[4870]: W0130 08:11:53.586934 
4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9be3c7b_3bd5_48ba_bd5e_affe9a29d8aa.slice/crio-d1e9dd4ef3e489bf0e9ad4838e5425badd078d9cbf455d47b5eb1daeb406663f WatchSource:0}: Error finding container d1e9dd4ef3e489bf0e9ad4838e5425badd078d9cbf455d47b5eb1daeb406663f: Status 404 returned error can't find the container with id d1e9dd4ef3e489bf0e9ad4838e5425badd078d9cbf455d47b5eb1daeb406663f Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.590201 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2xk7\" (UniqueName: \"kubernetes.io/projected/539f32b7-3075-49f4-b9f6-e63ac1d76d61-kube-api-access-p2xk7\") pod \"machine-config-controller-84d6567774-zghdb\" (UID: \"539f32b7-3075-49f4-b9f6-e63ac1d76d61\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zghdb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.609550 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nw244" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.615089 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j4f7\" (UniqueName: \"kubernetes.io/projected/38a2c6cb-fd9d-42f6-8774-647c544bd0f9-kube-api-access-6j4f7\") pod \"multus-admission-controller-857f4d67dd-g6x2r\" (UID: \"38a2c6cb-fd9d-42f6-8774-647c544bd0f9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g6x2r" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.621173 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.626312 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.628092 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgt5z\" (UniqueName: \"kubernetes.io/projected/9624fd43-bfa5-42c8-bebd-95a89988847d-kube-api-access-wgt5z\") pod \"machine-approver-56656f9798-vn7wx\" (UID: \"9624fd43-bfa5-42c8-bebd-95a89988847d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vn7wx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.634969 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:53 crc kubenswrapper[4870]: E0130 08:11:53.635413 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:54.135109238 +0000 UTC m=+152.830656347 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.635981 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: E0130 08:11:53.637004 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:54.136968263 +0000 UTC m=+152.832515372 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.644462 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kvwdd" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.646731 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crsg9\" (UniqueName: \"kubernetes.io/projected/836ae3f6-06f5-4996-9f9c-cacfb63fe855-kube-api-access-crsg9\") pod \"catalog-operator-68c6474976-jkpz9\" (UID: \"836ae3f6-06f5-4996-9f9c-cacfb63fe855\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jkpz9" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.665216 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bznlq\" (UniqueName: \"kubernetes.io/projected/4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e-kube-api-access-bznlq\") pod \"authentication-operator-69f744f599-l6p59\" (UID: \"4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l6p59" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.668178 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67xxt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.677275 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.685936 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957"] Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.687556 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vn7wx" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.691169 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdz27\" (UniqueName: \"kubernetes.io/projected/fe4278ad-53ec-4f7f-9c39-a00b6fa505c5-kube-api-access-wdz27\") pod \"router-default-5444994796-dfwzs\" (UID: \"fe4278ad-53ec-4f7f-9c39-a00b6fa505c5\") " pod="openshift-ingress/router-default-5444994796-dfwzs" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.701530 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-l6p59" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.731617 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-g6x2r" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.733444 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wsh5\" (UniqueName: \"kubernetes.io/projected/6f6b1608-a0e3-4b68-ae9b-f8cfeed9cd72-kube-api-access-6wsh5\") pod \"control-plane-machine-set-operator-78cbb6b69f-vzzk7\" (UID: \"6f6b1608-a0e3-4b68-ae9b-f8cfeed9cd72\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vzzk7" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.745485 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmwtm\" (UniqueName: \"kubernetes.io/projected/8ede517d-773d-4f0b-8c0a-42ae13359f95-kube-api-access-qmwtm\") pod \"marketplace-operator-79b997595-jh9j6\" (UID: \"8ede517d-773d-4f0b-8c0a-42ae13359f95\") " pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.746160 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:53 crc kubenswrapper[4870]: E0130 08:11:53.746652 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:54.246629475 +0000 UTC m=+152.942176584 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.749572 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-dfwzs" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.755725 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88dw5\" (UniqueName: \"kubernetes.io/projected/53a09b74-1b42-4535-a853-0752b6d1f90a-kube-api-access-88dw5\") pod \"olm-operator-6b444d44fb-fn5hp\" (UID: \"53a09b74-1b42-4535-a853-0752b6d1f90a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fn5hp" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.766690 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8x7q\" (UniqueName: \"kubernetes.io/projected/f6d9ba19-88ea-489c-9f03-918e8b225e3b-kube-api-access-n8x7q\") pod \"migrator-59844c95c7-ps5nw\" (UID: \"f6d9ba19-88ea-489c-9f03-918e8b225e3b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ps5nw" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.783002 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-v7bvt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.783163 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vzzk7" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.783794 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chvf4\" (UniqueName: \"kubernetes.io/projected/e64f35db-e72b-4d73-b501-7c2aff5cc609-kube-api-access-chvf4\") pod \"machine-config-operator-74547568cd-qnjxg\" (UID: \"e64f35db-e72b-4d73-b501-7c2aff5cc609\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qnjxg" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.787377 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qnjxg" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.795774 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jkpz9" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.805537 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fn5hp" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.808078 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kll8p\" (UniqueName: \"kubernetes.io/projected/7053ea40-6d30-41d8-bcb1-8f55e95feb22-kube-api-access-kll8p\") pod \"package-server-manager-789f6589d5-26vrg\" (UID: \"7053ea40-6d30-41d8-bcb1-8f55e95feb22\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-26vrg" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.816308 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zghdb" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.828804 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lss9f\" (UniqueName: \"kubernetes.io/projected/78a27619-258e-4bed-afb0-1706904c6f9d-kube-api-access-lss9f\") pod \"service-ca-operator-777779d784-xvt4n\" (UID: \"78a27619-258e-4bed-afb0-1706904c6f9d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xvt4n" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.829221 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.847767 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e740ffac-368d-45d5-89a8-25d370581945-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ll2mq\" (UID: \"e740ffac-368d-45d5-89a8-25d370581945\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ll2mq" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.848638 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:53 crc kubenswrapper[4870]: E0130 08:11:53.848972 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:54.34895828 +0000 UTC m=+153.044505389 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.871095 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77lc5\" (UniqueName: \"kubernetes.io/projected/7914d7ba-40e4-4ada-9dd4-66f3d86d5dc8-kube-api-access-77lc5\") pod \"service-ca-9c57cc56f-2l7mq\" (UID: \"7914d7ba-40e4-4ada-9dd4-66f3d86d5dc8\") " pod="openshift-service-ca/service-ca-9c57cc56f-2l7mq" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.887817 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vjvf5"] Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.892937 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5dzd\" (UniqueName: \"kubernetes.io/projected/853280ad-9d5a-4fe9-852f-c0596e70dc49-kube-api-access-n5dzd\") pod \"machine-config-server-pm4xm\" (UID: \"853280ad-9d5a-4fe9-852f-c0596e70dc49\") " pod="openshift-machine-config-operator/machine-config-server-pm4xm" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.913629 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22qbp\" (UniqueName: \"kubernetes.io/projected/1a6267b2-1222-4c0b-a890-c146d83b583d-kube-api-access-22qbp\") pod \"cluster-samples-operator-665b6dd947-tsghw\" (UID: \"1a6267b2-1222-4c0b-a890-c146d83b583d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tsghw" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.932335 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv5rm\" (UniqueName: \"kubernetes.io/projected/20fcc16b-f2b2-4a33-a8b2-567bec77d7ca-kube-api-access-qv5rm\") pod \"csi-hostpathplugin-k7wnt\" (UID: \"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.952869 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:53 crc kubenswrapper[4870]: E0130 08:11:53.953212 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:54.453169312 +0000 UTC m=+153.148716591 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.957183 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l822t\" (UniqueName: \"kubernetes.io/projected/46d623aa-7e54-4c20-aed3-3f125395a073-kube-api-access-l822t\") pod \"packageserver-d55dfcdfc-dhpbr\" (UID: \"46d623aa-7e54-4c20-aed3-3f125395a073\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.979283 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tsghw" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.981959 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qp2d\" (UniqueName: \"kubernetes.io/projected/93fd6b37-eee2-4fd5-aa18-51eecea65a3b-kube-api-access-9qp2d\") pod \"collect-profiles-29496000-25p92\" (UID: \"93fd6b37-eee2-4fd5-aa18-51eecea65a3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92" Jan 30 08:11:53 crc kubenswrapper[4870]: I0130 08:11:53.998925 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk6dp\" (UniqueName: \"kubernetes.io/projected/a58a222f-98a0-46b4-9ea8-36a922f6a349-kube-api-access-kk6dp\") pod \"dns-default-mnsdp\" (UID: \"a58a222f-98a0-46b4-9ea8-36a922f6a349\") " pod="openshift-dns/dns-default-mnsdp" Jan 30 08:11:54 crc kubenswrapper[4870]: W0130 08:11:54.002406 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9624fd43_bfa5_42c8_bebd_95a89988847d.slice/crio-b5b191882c8214cf7abd45877685493aaf17bab3ed72290121d2be06c0528f59 WatchSource:0}: Error finding container b5b191882c8214cf7abd45877685493aaf17bab3ed72290121d2be06c0528f59: Status 404 returned error can't find the container with id b5b191882c8214cf7abd45877685493aaf17bab3ed72290121d2be06c0528f59 Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.011747 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm9wg\" (UniqueName: \"kubernetes.io/projected/a8a1f91a-b48e-442f-9ab6-d704b3927315-kube-api-access-nm9wg\") pod \"ingress-canary-szhwx\" (UID: \"a8a1f91a-b48e-442f-9ab6-d704b3927315\") " pod="openshift-ingress-canary/ingress-canary-szhwx" Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.023112 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ps5nw" Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.023676 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xxrkx"] Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.041795 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2l7mq" Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.056982 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92" Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.060019 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:54 crc kubenswrapper[4870]: E0130 08:11:54.060585 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:54.560555427 +0000 UTC m=+153.256102536 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.063821 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr" Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.083150 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-26vrg" Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.122659 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xvt4n" Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.126933 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ssxgx"] Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.129548 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lcwc" event={"ID":"b2cd7eb7-87cb-44dc-a01f-17985460c12c","Type":"ContainerStarted","Data":"bcb05793641bd67c1acbeee4554363d30fd2eaaf8071b45c4e65086e61e263dd"} Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.129613 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lcwc" event={"ID":"b2cd7eb7-87cb-44dc-a01f-17985460c12c","Type":"ContainerStarted","Data":"30261041a241d2ac251a9bc33bdaec03f9eb7a434d15696d60f69b0dba1f5cbf"} Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.134434 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-s6768" event={"ID":"41ae1460-1e39-4d11-9357-3e0111521a8e","Type":"ContainerStarted","Data":"1702f30a64f9b42998cb07d3131e3070d10837649d10efef360d13a4b1741400"} Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.134467 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-s6768" event={"ID":"41ae1460-1e39-4d11-9357-3e0111521a8e","Type":"ContainerStarted","Data":"b2e7a12e8e0a754fdea117b1a6d70f90fbef4cf713402e604223c579e30d53dd"} Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.135964 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ll2mq" Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.143664 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mnsdp" Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.156396 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vn7wx" event={"ID":"9624fd43-bfa5-42c8-bebd-95a89988847d","Type":"ContainerStarted","Data":"b5b191882c8214cf7abd45877685493aaf17bab3ed72290121d2be06c0528f59"} Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.159002 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" event={"ID":"d4876c72-6cd1-43e0-b44a-45c4bd69e91f","Type":"ContainerStarted","Data":"94d52f9687de877d5fd97b94963947e16acbe6d1f11849a8cb9317ae4e717ce7"} Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.161368 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:54 crc kubenswrapper[4870]: E0130 08:11:54.162111 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 08:11:54.6620843 +0000 UTC m=+153.357631409 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.168445 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.170752 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-cdzxd" event={"ID":"b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa","Type":"ContainerStarted","Data":"cb8b82bd4cee4a06ed3352e7c2149843621ae9f790652a0e9beb76836c9e0e3d"} Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.170791 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-cdzxd" event={"ID":"b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa","Type":"ContainerStarted","Data":"d1e9dd4ef3e489bf0e9ad4838e5425badd078d9cbf455d47b5eb1daeb406663f"} Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.172303 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-cdzxd" Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.189672 4870 patch_prober.go:28] interesting pod/console-operator-58897d9998-cdzxd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.189722 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-szhwx" Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.189751 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-cdzxd" podUID="b9be3c7b-3bd5-48ba-bd5e-affe9a29d8aa" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.191559 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pm4xm" Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.207671 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mtt79" event={"ID":"b41bf206-4b95-49db-85b6-2e5fe6dcc5ef","Type":"ContainerStarted","Data":"cc9b28324b264a123ca85a62a983f8f0edd9c35267625c7a8f40322a0b38f98d"} Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.207711 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mtt79" event={"ID":"b41bf206-4b95-49db-85b6-2e5fe6dcc5ef","Type":"ContainerStarted","Data":"c81f54e94b920e6c7cfd46f69dde3b3574a636bf82ebf62b99dce8708d66585b"} Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.222768 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nfd4" event={"ID":"15eddd48-9a41-41cb-a284-80d01c7f8aad","Type":"ContainerStarted","Data":"5ef5c75822c22a3f709c47c243272f7efcd72e282e48476b6a5544787711f3a1"} Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.222806 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nfd4" event={"ID":"15eddd48-9a41-41cb-a284-80d01c7f8aad","Type":"ContainerStarted","Data":"5f268d6983af1b7b879bee78cbfed53fb30971cb70a4d5ffa4f6c3907b23bc91"} Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.255686 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jr94b" event={"ID":"042ed63b-a1a9-4072-ae87-71b9fb98280c","Type":"ContainerStarted","Data":"85dfa0294fdcdeed334d2765c73e93b189d3c341512f6440f8c0311b0059dda2"} Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.255758 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jr94b" event={"ID":"042ed63b-a1a9-4072-ae87-71b9fb98280c","Type":"ContainerStarted","Data":"32bfae6fb5cd0844ffae17d236efc6733c35b12c04679da68fc6b71a72a5cb35"} Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.261748 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67xxt"] Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.263446 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:54 crc kubenswrapper[4870]: E0130 08:11:54.263888 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:54.763857469 +0000 UTC m=+153.459404568 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.263516 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vjvf5" event={"ID":"2dc32e9a-d384-4f5e-b1ea-c9ac2f57e715","Type":"ContainerStarted","Data":"e27beb861beaaa06426e0756d71bc535d8643b5b6bf47be8356eab351436ab77"} Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.281292 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7t7rr" event={"ID":"739bcba5-d8ef-45fe-abf9-02d74d0d093c","Type":"ContainerStarted","Data":"048bfa0b79a0299901e375bbc07072944c6537030ca9a7cf23b20e0d3754683f"} Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.285337 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" event={"ID":"0373f9a1-1537-4f29-905a-b0fb2affc113","Type":"ContainerStarted","Data":"c692b0d8c3c6b489e0a3be41a0f61c863ee7572f9ff504585ba318691ad55a3a"} Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.294185 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" event={"ID":"c488e93c-573d-4d04-a272-699af1059a0e","Type":"ContainerStarted","Data":"704010f76326b14b45acb49d52a3c39fd09423589bc0b99052ca69b69f06912c"} Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.297095 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" event={"ID":"faa3ca31-2951-4f0d-84f0-0b19a32c9927","Type":"ContainerStarted","Data":"914a412c6899858cfe6663eb8b2abe03171d9143153a4ffb21f681ec056f1903"} Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.366078 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:54 crc kubenswrapper[4870]: E0130 08:11:54.366322 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:54.866294768 +0000 UTC m=+153.561841867 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.366846 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:54 crc kubenswrapper[4870]: E0130 08:11:54.367475 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:54.867428361 +0000 UTC m=+153.562975480 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:54 crc kubenswrapper[4870]: W0130 08:11:54.458725 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda55dd8c8_fb6d_450b_a80d_35e7223d2cff.slice/crio-414251ee3c4819ded89c78bee12a2c46a661efd5efaa339b79e4a8f836d91776 WatchSource:0}: Error finding container 414251ee3c4819ded89c78bee12a2c46a661efd5efaa339b79e4a8f836d91776: Status 404 returned error can't find the container with id 414251ee3c4819ded89c78bee12a2c46a661efd5efaa339b79e4a8f836d91776 Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.473642 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:54 crc kubenswrapper[4870]: E0130 08:11:54.475965 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:54.975945459 +0000 UTC m=+153.671492568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.578026 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:54 crc kubenswrapper[4870]: E0130 08:11:54.578774 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:55.07874677 +0000 UTC m=+153.774293879 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.632643 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-cdzxd" podStartSLOduration=127.632622957 podStartE2EDuration="2m7.632622957s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:54.631974728 +0000 UTC m=+153.327521837" watchObservedRunningTime="2026-01-30 08:11:54.632622957 +0000 UTC m=+153.328170066" Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.679293 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:54 crc kubenswrapper[4870]: E0130 08:11:54.679434 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:55.179410226 +0000 UTC m=+153.874957335 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.679958 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:54 crc kubenswrapper[4870]: E0130 08:11:54.680307 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:55.180298692 +0000 UTC m=+153.875845801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.781714 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:54 crc kubenswrapper[4870]: E0130 08:11:54.782133 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:55.282113883 +0000 UTC m=+153.977660992 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.885343 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:54 crc kubenswrapper[4870]: E0130 08:11:54.886156 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:55.386138119 +0000 UTC m=+154.081685218 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.986548 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:54 crc kubenswrapper[4870]: E0130 08:11:54.986891 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:55.486862518 +0000 UTC m=+154.182409627 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:54 crc kubenswrapper[4870]: I0130 08:11:54.987212 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7t7rr" podStartSLOduration=127.987180137 podStartE2EDuration="2m7.987180137s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:54.987109235 +0000 UTC m=+153.682656344" watchObservedRunningTime="2026-01-30 08:11:54.987180137 +0000 UTC m=+153.682727246" Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.025946 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-l6p59"] Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.044279 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nw244"] Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.088389 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:55 crc kubenswrapper[4870]: E0130 08:11:55.088970 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:55.588951426 +0000 UTC m=+154.284498535 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.187094 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mtt79" podStartSLOduration=128.187076528 podStartE2EDuration="2m8.187076528s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:55.142903246 +0000 UTC m=+153.838450355" watchObservedRunningTime="2026-01-30 08:11:55.187076528 +0000 UTC m=+153.882623637" Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.190014 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:55 crc kubenswrapper[4870]: E0130 08:11:55.190207 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:55.69018137 +0000 UTC m=+154.385728489 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.190293 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:55 crc kubenswrapper[4870]: E0130 08:11:55.190705 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:55.690688345 +0000 UTC m=+154.386235454 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.222027 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lcwc" podStartSLOduration=128.222008338 podStartE2EDuration="2m8.222008338s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:55.220591036 +0000 UTC m=+153.916138135" watchObservedRunningTime="2026-01-30 08:11:55.222008338 +0000 UTC m=+153.917555447" Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.252272 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.252348 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.294824 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:55 crc kubenswrapper[4870]: E0130 08:11:55.297185 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:55.797164793 +0000 UTC m=+154.492711902 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.343334 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vjvf5" event={"ID":"2dc32e9a-d384-4f5e-b1ea-c9ac2f57e715","Type":"ContainerStarted","Data":"6f70705e79c4782be5d21a500bb13d37cd9c32f5a8a48fc77f6a5dc65f34df8f"} Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.360056 4870 generic.go:334] "Generic (PLEG): container finished" podID="faa3ca31-2951-4f0d-84f0-0b19a32c9927" containerID="a613a00984bbb4472b7e0c5e6ab507edb340599de69eed4287f29d41e305eeac" exitCode=0 Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.360218 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" event={"ID":"faa3ca31-2951-4f0d-84f0-0b19a32c9927","Type":"ContainerDied","Data":"a613a00984bbb4472b7e0c5e6ab507edb340599de69eed4287f29d41e305eeac"} Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.365860 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dfwzs" event={"ID":"fe4278ad-53ec-4f7f-9c39-a00b6fa505c5","Type":"ContainerStarted","Data":"a6c4d7c89dd0a82eb8beb0b07000eaf09866aad342a4bd9eb2c90351b9a5a6c0"} Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.366097 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dfwzs" event={"ID":"fe4278ad-53ec-4f7f-9c39-a00b6fa505c5","Type":"ContainerStarted","Data":"70cc36eaa4b6b941f775779319cd1468328261a8aede7a08f5368a578afe8f57"} Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.376509 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kvwdd"] Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.382403 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nw244" event={"ID":"a361e11a-9e2f-4abf-a8c1-783f328f13a9","Type":"ContainerStarted","Data":"6a1a289b7e1ebbebf9ca0e8efc09f4b2c0ebe493771e8e186e9f8562b8e03485"} Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.399565 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:55 crc kubenswrapper[4870]: E0130 08:11:55.399957 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:55.899944152 +0000 UTC m=+154.595491251 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.414529 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qnjxg"] Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.422753 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-g6x2r"] Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.428079 4870 generic.go:334] "Generic (PLEG): container finished" podID="0373f9a1-1537-4f29-905a-b0fb2affc113" containerID="161645af44e76731c52c9d8a0f7698cb7ed4ac904bddb6e18b4e43f4d491d7c3" exitCode=0 Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.428307 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" event={"ID":"0373f9a1-1537-4f29-905a-b0fb2affc113","Type":"ContainerDied","Data":"161645af44e76731c52c9d8a0f7698cb7ed4ac904bddb6e18b4e43f4d491d7c3"} Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.428337 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" event={"ID":"0373f9a1-1537-4f29-905a-b0fb2affc113","Type":"ContainerStarted","Data":"493c238fb45d7f8e5b3102ed782b94c8080ba2b5d6e05d93d843643190c1f7a0"} Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.431102 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-l6p59" event={"ID":"4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e","Type":"ContainerStarted","Data":"4bc0e9346d43dfa548f826bccbd9a7d7b8966b1950a6ab5aeb4c0afa560b8b8a"} Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.431480 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-2mj87"] Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.438698 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jr94b" event={"ID":"042ed63b-a1a9-4072-ae87-71b9fb98280c","Type":"ContainerStarted","Data":"0d306a2ea52be2b8748d5e188b1321971efa260e2a0ad0a3e62ebe050acf1c10"} Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.455313 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.461432 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pm4xm" event={"ID":"853280ad-9d5a-4fe9-852f-c0596e70dc49","Type":"ContainerStarted","Data":"d5b998d8cbbc41e213c1adaf2af84c5846cf77ac184bacad6b45dd5402381811"} Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.461477 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pm4xm" event={"ID":"853280ad-9d5a-4fe9-852f-c0596e70dc49","Type":"ContainerStarted","Data":"cd41dbdded3b7cd061795fa284f773eaad7c276bb3dd35ffbdab3935139c4dde"} Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.480292 4870 patch_prober.go:28] 
interesting pod/oauth-openshift-558db77b4-xxrkx container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" start-of-body= Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.480341 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" podUID="d4876c72-6cd1-43e0-b44a-45c4bd69e91f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" Jan 30 08:11:55 crc kubenswrapper[4870]: W0130 08:11:55.492552 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2aa49ce7_f902_408a_94f1_da14a661e813.slice/crio-11900425e10bfa9bf6c9c649d5dac8048b1ed7e104a45655b98935b712a80d21 WatchSource:0}: Error finding container 11900425e10bfa9bf6c9c649d5dac8048b1ed7e104a45655b98935b712a80d21: Status 404 returned error can't find the container with id 11900425e10bfa9bf6c9c649d5dac8048b1ed7e104a45655b98935b712a80d21 Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.500806 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:55 crc kubenswrapper[4870]: E0130 08:11:55.502083 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:56.002049962 +0000 UTC m=+154.697597071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.502382 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" event={"ID":"c488e93c-573d-4d04-a272-699af1059a0e","Type":"ContainerStarted","Data":"1c707abb6d42fe8dbbf92f521b8a55ebafd7443ac15de36d0828dd259789e664"} Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.504567 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.505185 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:55 crc kubenswrapper[4870]: W0130 08:11:55.505651 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38a2c6cb_fd9d_42f6_8774_647c544bd0f9.slice/crio-33351ac2c30e695815d9f91b72215544b26c926007cc52ebb8d5ab63cdf14cf9 WatchSource:0}: Error finding container 33351ac2c30e695815d9f91b72215544b26c926007cc52ebb8d5ab63cdf14cf9: Status 404 returned error can't find the container with id 33351ac2c30e695815d9f91b72215544b26c926007cc52ebb8d5ab63cdf14cf9 Jan 30 08:11:55 crc kubenswrapper[4870]: E0130 08:11:55.507518 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:56.007504192 +0000 UTC m=+154.703051301 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.509493 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67xxt" event={"ID":"a55dd8c8-fb6d-450b-a80d-35e7223d2cff","Type":"ContainerStarted","Data":"a070c6ca6b9b4e2bad997b55632f9fbafd24283227d2d702a8aa91d7cb030972"} Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.509531 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67xxt" event={"ID":"a55dd8c8-fb6d-450b-a80d-35e7223d2cff","Type":"ContainerStarted","Data":"414251ee3c4819ded89c78bee12a2c46a661efd5efaa339b79e4a8f836d91776"} Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.513598 4870 generic.go:334] "Generic (PLEG): container finished" podID="15eddd48-9a41-41cb-a284-80d01c7f8aad" containerID="5ef5c75822c22a3f709c47c243272f7efcd72e282e48476b6a5544787711f3a1" exitCode=0 Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.513657 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nfd4" event={"ID":"15eddd48-9a41-41cb-a284-80d01c7f8aad","Type":"ContainerDied","Data":"5ef5c75822c22a3f709c47c243272f7efcd72e282e48476b6a5544787711f3a1"} Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.520829 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" event={"ID":"02c6a6cf-5413-4524-a86c-11fa4a19821f","Type":"ContainerStarted","Data":"306d3e8ee5b77c9fa36ff766942e7989e0f0754c0a0ad3d24ee62ac20f011b64"} Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.539492 4870 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-8p957 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.539570 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" podUID="c488e93c-573d-4d04-a272-699af1059a0e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.546790 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-s6768" event={"ID":"41ae1460-1e39-4d11-9357-3e0111521a8e","Type":"ContainerStarted","Data":"5387a8bb1939e3c9fb3b3f3dcf0a447bd8edb22a09de62e7ff0c85039e608aa1"} Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.610169 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:55 crc kubenswrapper[4870]: E0130 08:11:55.611630 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:56.111611561 +0000 UTC m=+154.807158670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.712535 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:55 crc kubenswrapper[4870]: E0130 08:11:55.714422 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:56.214385079 +0000 UTC m=+154.909932188 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.729194 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kkf4z"] Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.780440 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-dfwzs" Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.782487 4870 patch_prober.go:28] interesting pod/router-default-5444994796-dfwzs container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.782534 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dfwzs" podUID="fe4278ad-53ec-4f7f-9c39-a00b6fa505c5" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.784396 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jh9j6"] Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.784656 4870 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-jr94b" podStartSLOduration=128.78463885 podStartE2EDuration="2m8.78463885s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:55.779762867 +0000 UTC m=+154.475309976" watchObservedRunningTime="2026-01-30 08:11:55.78463885 +0000 UTC m=+154.480185959" Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.797685 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zghdb"] Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.803025 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vzzk7"] Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.818107 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jkpz9"] Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.818917 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-pm4xm" podStartSLOduration=5.81890729 podStartE2EDuration="5.81890729s" podCreationTimestamp="2026-01-30 08:11:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:55.814230342 +0000 UTC m=+154.509777461" watchObservedRunningTime="2026-01-30 08:11:55.81890729 +0000 UTC m=+154.514454399" Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.821515 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:55 crc kubenswrapper[4870]: E0130 08:11:55.821861 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:56.321844077 +0000 UTC m=+155.017391186 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.821944 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:55 crc kubenswrapper[4870]: E0130 08:11:55.822173 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:56.322165657 +0000 UTC m=+155.017712766 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.837241 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-v7bvt"] Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.866799 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vjvf5" podStartSLOduration=129.866780541 podStartE2EDuration="2m9.866780541s" podCreationTimestamp="2026-01-30 08:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:55.862868426 +0000 UTC m=+154.558415535" watchObservedRunningTime="2026-01-30 08:11:55.866780541 +0000 UTC m=+154.562327650" Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.923784 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:55 crc kubenswrapper[4870]: E0130 08:11:55.923977 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:56.423948596 +0000 UTC m=+155.119495705 (durationBeforeRetry 500ms). 
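The recurring "(durationBeforeRetry 500ms)" is the kubelet's per-operation retry gate: after a volume operation fails, nestedpendingoperations records a deadline and refuses to start the same operation again until that deadline passes, which is why each failure here is followed by a fresh attempt roughly half a second later. A self-contained sketch of that bookkeeping; the names and the grow-to-a-cap behavior are illustrative assumptions (in this stretch of the log the window stays at 500ms on every attempt):

    package main

    import (
        "fmt"
        "time"
    )

    // backoff models one operation's retry state: a failure sets a deadline
    // ("no retries permitted until ...") computed from the current delay.
    type backoff struct {
        delay     time.Duration
        notBefore time.Time
    }

    const (
        initialDelay = 500 * time.Millisecond // matches durationBeforeRetry 500ms
        maxDelay     = 2 * time.Minute        // assumed cap, for illustration
    )

    func (b *backoff) fail(now time.Time) {
        if b.delay == 0 {
            b.delay = initialDelay
        } else if b.delay*2 <= maxDelay {
            b.delay *= 2 // typical exponential growth on repeated failures
        }
        b.notBefore = now.Add(b.delay)
    }

    func (b *backoff) retryAllowed(now time.Time) bool {
        return !now.Before(b.notBefore)
    }

    func main() {
        var b backoff
        now := time.Date(2026, 1, 30, 8, 11, 55, 507518000, time.UTC)
        for attempt := 1; attempt <= 3; attempt++ {
            b.fail(now)
            fmt.Printf("attempt %d failed; no retries permitted until %s (durationBeforeRetry %v)\n",
                attempt, b.notBefore.Format(time.RFC3339Nano), b.delay)
            now = b.notBefore // the reconciler tries again once the gate opens
        }
    }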
Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.926359 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65"
Jan 30 08:11:55 crc kubenswrapper[4870]: E0130 08:11:55.928210 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:56.428193401 +0000 UTC m=+155.123740510 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.962904 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" podStartSLOduration=128.962884613 podStartE2EDuration="2m8.962884613s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:55.902205485 +0000 UTC m=+154.597752594" watchObservedRunningTime="2026-01-30 08:11:55.962884613 +0000 UTC m=+154.658431722"
Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.971351 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fn5hp"]
Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.976839 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k7wnt"]
Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.982822 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92"]
Jan 30 08:11:55 crc kubenswrapper[4870]: I0130 08:11:55.990864 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-s6768" podStartSLOduration=128.990850577 podStartE2EDuration="2m8.990850577s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:55.989583021 +0000 UTC m=+154.685130130" watchObservedRunningTime="2026-01-30 08:11:55.990850577 +0000 UTC m=+154.686397686"
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.027636 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:11:56 crc kubenswrapper[4870]: E0130 08:11:56.028383 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:56.528369194 +0000 UTC m=+155.223916303 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.032546 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67xxt" podStartSLOduration=129.032528006 podStartE2EDuration="2m9.032528006s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:56.030492356 +0000 UTC m=+154.726039465" watchObservedRunningTime="2026-01-30 08:11:56.032528006 +0000 UTC m=+154.728075125"
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.043025 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ll2mq"]
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.053968 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mnsdp"]
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.068930 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr"]
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.105387 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" podStartSLOduration=130.105356642 podStartE2EDuration="2m10.105356642s" podCreationTimestamp="2026-01-30 08:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:56.098794479 +0000 UTC m=+154.794341588" watchObservedRunningTime="2026-01-30 08:11:56.105356642 +0000 UTC m=+154.800903751"
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.114207 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ps5nw"]
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.129702 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-dfwzs" podStartSLOduration=129.1296825 podStartE2EDuration="2m9.1296825s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:56.127731031 +0000 UTC m=+154.823278140" watchObservedRunningTime="2026-01-30 08:11:56.1296825 +0000 UTC m=+154.825229609"
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.130684 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65"
Jan 30 08:11:56 crc kubenswrapper[4870]: E0130 08:11:56.131129 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:56.631107821 +0000 UTC m=+155.326654930 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:11:56 crc kubenswrapper[4870]: W0130 08:11:56.133268 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20fcc16b_f2b2_4a33_a8b2_567bec77d7ca.slice/crio-e250f73a121c440b98359ca6a9970a763e46ca651a33f5e4c1db99fb7006c06b WatchSource:0}: Error finding container e250f73a121c440b98359ca6a9970a763e46ca651a33f5e4c1db99fb7006c06b: Status 404 returned error can't find the container with id e250f73a121c440b98359ca6a9970a763e46ca651a33f5e4c1db99fb7006c06b
Jan 30 08:11:56 crc kubenswrapper[4870]: W0130 08:11:56.133720 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93fd6b37_eee2_4fd5_aa18_51eecea65a3b.slice/crio-1d4f6287093b36cb2637cb372b313eb6bd9562c5fa5d2aeb1ae5bac20a51d619 WatchSource:0}: Error finding container 1d4f6287093b36cb2637cb372b313eb6bd9562c5fa5d2aeb1ae5bac20a51d619: Status 404 returned error can't find the container with id 1d4f6287093b36cb2637cb372b313eb6bd9562c5fa5d2aeb1ae5bac20a51d619
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.145494 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-cdzxd"
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.154345 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-szhwx"]
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.159767 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" podStartSLOduration=129.159750606 podStartE2EDuration="2m9.159750606s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:56.159316733 +0000 UTC m=+154.854863842" watchObservedRunningTime="2026-01-30 08:11:56.159750606 +0000 UTC m=+154.855297715"
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.170119 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xvt4n"]
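The W-level "Failed to process watch event ... Status 404" lines above are a startup race, separate from the volume problem: the cgroup directory for a just-created CRI-O container (crio-<id> under /kubepods.slice) becomes visible to the resource watcher before the runtime can answer a lookup for that container id. The same ids reappear below in successful ContainerStarted events (1d4f6287... at 08:11:56.589363, e250f73a... at 08:11:56.831600), so the warnings are transient. A toy model of that tolerant handling, with hypothetical names (handleWatchEvent, lookup), not the kubelet's actual code path:

    package main

    import (
        "errors"
        "fmt"
    )

    // handleWatchEvent models the pattern in the warnings above: if the runtime
    // cannot resolve the container id yet, log the miss and drop the event; a
    // later relist (the PLEG ContainerStarted events) picks the container up.
    func handleWatchEvent(id string, lookup func(string) error) {
        if err := lookup(id); err != nil {
            fmt.Printf("W Failed to process watch event for crio-%s: %v\n", id, err)
            return
        }
        fmt.Println("registered container", id)
    }

    func main() {
        notYetKnown := errors.New("Status 404 returned error can't find the container with id 1d4f6287")
        handleWatchEvent("1d4f6287", func(string) error { return notYetKnown })
    }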
"SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xvt4n"] Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.204219 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tsghw"] Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.231479 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:56 crc kubenswrapper[4870]: E0130 08:11:56.231707 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:56.731667895 +0000 UTC m=+155.427215004 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.231977 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:56 crc kubenswrapper[4870]: E0130 08:11:56.232539 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:56.732528871 +0000 UTC m=+155.428075980 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:56 crc kubenswrapper[4870]: W0130 08:11:56.240132 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode740ffac_368d_45d5_89a8_25d370581945.slice/crio-d5b145700f9be201c79c1ea577ddea45dedffbdfbc9768d25f803b9a284eafeb WatchSource:0}: Error finding container d5b145700f9be201c79c1ea577ddea45dedffbdfbc9768d25f803b9a284eafeb: Status 404 returned error can't find the container with id d5b145700f9be201c79c1ea577ddea45dedffbdfbc9768d25f803b9a284eafeb Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.289729 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2l7mq"] Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.297022 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-26vrg"] Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.332999 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:56 crc kubenswrapper[4870]: E0130 08:11:56.333119 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:56.833090394 +0000 UTC m=+155.528637503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.333997 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:56 crc kubenswrapper[4870]: E0130 08:11:56.334429 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:56.834413844 +0000 UTC m=+155.529960953 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:56 crc kubenswrapper[4870]: W0130 08:11:56.359075 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78a27619_258e_4bed_afb0_1706904c6f9d.slice/crio-6046da1499505b13ecbf857cd5774cec4a983a18a644f4c11ced33d3d17c2dc7 WatchSource:0}: Error finding container 6046da1499505b13ecbf857cd5774cec4a983a18a644f4c11ced33d3d17c2dc7: Status 404 returned error can't find the container with id 6046da1499505b13ecbf857cd5774cec4a983a18a644f4c11ced33d3d17c2dc7 Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.435440 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:56 crc kubenswrapper[4870]: E0130 08:11:56.435575 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:56.935549604 +0000 UTC m=+155.631096713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.435724 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:56 crc kubenswrapper[4870]: E0130 08:11:56.436106 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:56.9360983 +0000 UTC m=+155.631645409 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.536745 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:56 crc kubenswrapper[4870]: E0130 08:11:56.537149 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:57.037130938 +0000 UTC m=+155.732678047 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.573224 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vzzk7" event={"ID":"6f6b1608-a0e3-4b68-ae9b-f8cfeed9cd72","Type":"ContainerStarted","Data":"2b8c6d0d251f89dca3e2b2cf029ae7c867dfac014052789646e8a14dc4db877f"} Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.578192 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mnsdp" event={"ID":"a58a222f-98a0-46b4-9ea8-36a922f6a349","Type":"ContainerStarted","Data":"ed89b14a22a777a3fe4673e13d2f17c414aa363605f67f6dbc02b53bacb1dc79"} Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.586181 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kvwdd" event={"ID":"e2bdbb40-bece-420f-9ff7-bdeff90c8bd2","Type":"ContainerStarted","Data":"bc9a22ef20f71808a3262567697360e8e3a67bdac6f2914c62a14ad9e0a6d13e"} Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.586301 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kvwdd" event={"ID":"e2bdbb40-bece-420f-9ff7-bdeff90c8bd2","Type":"ContainerStarted","Data":"3ca46cade8d96a03abd528afe76eaff05d47d26d90f9fb94c295000c92ddc4c0"} Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.588206 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ps5nw" event={"ID":"f6d9ba19-88ea-489c-9f03-918e8b225e3b","Type":"ContainerStarted","Data":"8184c740e6f01b41b1ba638aa4e9de1ad5d506a2e0544a861440c012bf6db070"} Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.589363 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92" event={"ID":"93fd6b37-eee2-4fd5-aa18-51eecea65a3b","Type":"ContainerStarted","Data":"1d4f6287093b36cb2637cb372b313eb6bd9562c5fa5d2aeb1ae5bac20a51d619"} Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.593689 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-szhwx" event={"ID":"a8a1f91a-b48e-442f-9ab6-d704b3927315","Type":"ContainerStarted","Data":"dc63f0a9d09e2f4a8a3ef0910de53b166a39b35eddcc581b73707d038096142a"} Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.603683 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-l6p59" event={"ID":"4c6eb485-ba6b-4f6a-8b30-a9d1da93b21e","Type":"ContainerStarted","Data":"f0aebbe0474a3162ab4bcab185203df3f074bf749e9e5d63d6da4ddca1de2204"} Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.611679 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" event={"ID":"d4876c72-6cd1-43e0-b44a-45c4bd69e91f","Type":"ContainerStarted","Data":"102b04420a2d27e2c42980d96dcb76309f8b40b50075cb8c01ca29cf54abe789"} Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.613952 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-ssxgx" event={"ID":"02c6a6cf-5413-4524-a86c-11fa4a19821f","Type":"ContainerStarted","Data":"aca766ffc64446f9915e30314c15cacd7356287ff174b7d03d0ec4672b1e2b47"} Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.616182 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-g6x2r" event={"ID":"38a2c6cb-fd9d-42f6-8774-647c544bd0f9","Type":"ContainerStarted","Data":"7e6196d52b3ae3ba4ea0ca83ff2e8c24460d40875921197226d496ba025d29d2"} Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.616231 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-g6x2r" event={"ID":"38a2c6cb-fd9d-42f6-8774-647c544bd0f9","Type":"ContainerStarted","Data":"33351ac2c30e695815d9f91b72215544b26c926007cc52ebb8d5ab63cdf14cf9"} Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.627160 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-l6p59" podStartSLOduration=130.627137911 podStartE2EDuration="2m10.627137911s" podCreationTimestamp="2026-01-30 08:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:56.626844432 +0000 UTC m=+155.322391551" watchObservedRunningTime="2026-01-30 08:11:56.627137911 +0000 UTC m=+155.322685020" Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.638076 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zghdb" event={"ID":"539f32b7-3075-49f4-b9f6-e63ac1d76d61","Type":"ContainerStarted","Data":"213bc74675eba6ab680346a182a37c2671b6f7a547a41ca076d57df4ed3db570"} Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.638457 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zghdb" event={"ID":"539f32b7-3075-49f4-b9f6-e63ac1d76d61","Type":"ContainerStarted","Data":"a21e071cdedb2464c3e329e037d2e88f2545eacc6c0da5b957ceabc790a4d541"} 
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.639824 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65"
Jan 30 08:11:56 crc kubenswrapper[4870]: E0130 08:11:56.640350 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:57.140330479 +0000 UTC m=+155.835877588 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.651766 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fn5hp" event={"ID":"53a09b74-1b42-4535-a853-0752b6d1f90a","Type":"ContainerStarted","Data":"dfbb14b3761f36ee8e53212a7b374ccf14d9ba35d639bbc27baf76ccdeee4c61"}
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.656437 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" event={"ID":"0373f9a1-1537-4f29-905a-b0fb2affc113","Type":"ContainerStarted","Data":"ef577f206b990cd40e889159cb224ec7ccc76453e9f23d1415ab27b1088bc3fd"}
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.659449 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" event={"ID":"8ede517d-773d-4f0b-8c0a-42ae13359f95","Type":"ContainerStarted","Data":"97e57ef1c7ce66b357f1ed6e1f8847cdeaccd06e556466aa82594a1548b78355"}
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.659495 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" event={"ID":"8ede517d-773d-4f0b-8c0a-42ae13359f95","Type":"ContainerStarted","Data":"3b948b615fba724f1687e73e5fcca06ca297443c072f9ebaf1a3471eb522792b"}
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.660484 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6"
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.661835 4870 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jh9j6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.661884 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" podUID="8ede517d-773d-4f0b-8c0a-42ae13359f95" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.662647 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" event={"ID":"faa3ca31-2951-4f0d-84f0-0b19a32c9927","Type":"ContainerStarted","Data":"e1356874b5553c3de0e80c8027a34dccd9503ab10c1652d6896f2d3991991483"}
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.682396 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jkpz9" event={"ID":"836ae3f6-06f5-4996-9f9c-cacfb63fe855","Type":"ContainerStarted","Data":"3c42782e2042e17a62aba7a88551bb76c4354f2317c7baf6a90dbef09771bb79"}
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.695273 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ll2mq" event={"ID":"e740ffac-368d-45d5-89a8-25d370581945","Type":"ContainerStarted","Data":"d5b145700f9be201c79c1ea577ddea45dedffbdfbc9768d25f803b9a284eafeb"}
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.702752 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" podStartSLOduration=130.702728029 podStartE2EDuration="2m10.702728029s" podCreationTimestamp="2026-01-30 08:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:56.701134292 +0000 UTC m=+155.396681401" watchObservedRunningTime="2026-01-30 08:11:56.702728029 +0000 UTC m=+155.398275138"
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.725702 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vn7wx" event={"ID":"9624fd43-bfa5-42c8-bebd-95a89988847d","Type":"ContainerStarted","Data":"819d0a3ba0aba2847ea6866af6b434dc03a10af1100b9de4cf8a2b6c90cd7fc0"}
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.725752 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vn7wx" event={"ID":"9624fd43-bfa5-42c8-bebd-95a89988847d","Type":"ContainerStarted","Data":"43a212698789375e32b58e7d2ff8c87558474660a036cc20b6d99b35d2a87a79"}
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.736834 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" podStartSLOduration=129.736810183 podStartE2EDuration="2m9.736810183s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:56.725622884 +0000 UTC m=+155.421169993" watchObservedRunningTime="2026-01-30 08:11:56.736810183 +0000 UTC m=+155.432357292"
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.744167 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:11:56 crc kubenswrapper[4870]: E0130 08:11:56.746364 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:57.246340624 +0000 UTC m=+155.941887733 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.757990 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" podStartSLOduration=129.757965266 podStartE2EDuration="2m9.757965266s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:56.754543395 +0000 UTC m=+155.450090504" watchObservedRunningTime="2026-01-30 08:11:56.757965266 +0000 UTC m=+155.453512365"
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.760126 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nfd4" event={"ID":"15eddd48-9a41-41cb-a284-80d01c7f8aad","Type":"ContainerStarted","Data":"a661d6d4097ab60ce214d8b9995fb7404122bfb23be4da944f9d5015d57b7db1"}
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.760350 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nfd4"
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.787975 4870 patch_prober.go:28] interesting pod/router-default-5444994796-dfwzs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 08:11:56 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld
Jan 30 08:11:56 crc kubenswrapper[4870]: [+]process-running ok
Jan 30 08:11:56 crc kubenswrapper[4870]: healthz check failed
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.788466 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dfwzs" podUID="fe4278ad-53ec-4f7f-9c39-a00b6fa505c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.789009 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vn7wx" podStartSLOduration=130.788984701 podStartE2EDuration="2m10.788984701s" podCreationTimestamp="2026-01-30 08:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:56.788444214 +0000 UTC m=+155.483991333" watchObservedRunningTime="2026-01-30 08:11:56.788984701 +0000 UTC m=+155.484531810"
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.800735 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-v7bvt" event={"ID":"78280554-7b5b-4ccf-a674-2664144e4f5a","Type":"ContainerStarted","Data":"7ac8837b500e7e99f2a04037c1cd17673291f48d58dc455d6093994bf9822a5b"}
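The router's startup probe fails in two distinct ways within a second: at 08:11:55.782487 the dial itself is refused (nothing is listening on localhost:1936 yet), while at 08:11:56.787975 the endpoint answers but returns 500 with a healthz breakdown ([-]backend-http and [-]has-synced still failing, [+]process-running ok). Both shapes count as failures: any transport error, and any status outside roughly the 2xx/3xx range, fails an HTTP probe, and the start of the response body is kept for the log line. A minimal sketch of a probe with that reporting, where probeHTTP is an illustrative helper rather than the kubelet's prober API:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // probeHTTP performs one HTTP GET probe and reports it the way the log
    // does: dial errors and non-2xx/3xx statuses are failures, and only the
    // start of the body is kept for the report.
    func probeHTTP(url string, timeout time.Duration) (ok bool, output string) {
        client := &http.Client{Timeout: timeout}
        resp, err := client.Get(url)
        if err != nil {
            return false, err.Error() // e.g. "... connect: connection refused"
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024)) // start-of-body
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return false, fmt.Sprintf("HTTP probe failed with statuscode: %d\n%s", resp.StatusCode, body)
        }
        return true, string(body)
    }

    func main() {
        ok, out := probeHTTP("http://localhost:1936/healthz/ready", time.Second)
        fmt.Println(ok, out)
    }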
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.821065 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nfd4" podStartSLOduration=129.821045006 podStartE2EDuration="2m9.821045006s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:56.819761448 +0000 UTC m=+155.515308557" watchObservedRunningTime="2026-01-30 08:11:56.821045006 +0000 UTC m=+155.516592115"
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.826333 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qnjxg" event={"ID":"e64f35db-e72b-4d73-b501-7c2aff5cc609","Type":"ContainerStarted","Data":"d5afab07dc1508ba107c7017534fb7ab1bb2586349374540103867bc099e0d5a"}
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.826404 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qnjxg" event={"ID":"e64f35db-e72b-4d73-b501-7c2aff5cc609","Type":"ContainerStarted","Data":"a2a4fcc7e4c608ac838c216112233cee8b4ddd3caaf5a3361e25bb63b66c7706"}
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.829603 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr" event={"ID":"46d623aa-7e54-4c20-aed3-3f125395a073","Type":"ContainerStarted","Data":"7df8b42d217ab65278902d2dcee76abde43d6768681053cb4e71d635d19c2d5f"}
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.831600 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" event={"ID":"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca","Type":"ContainerStarted","Data":"e250f73a121c440b98359ca6a9970a763e46ca651a33f5e4c1db99fb7006c06b"}
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.832902 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" event={"ID":"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7","Type":"ContainerStarted","Data":"a77c631c25ab71f0cef3c69513d9b0866e7e0d3305252072a16f62fc9dac93dd"}
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.832929 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" event={"ID":"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7","Type":"ContainerStarted","Data":"c3d25320838bc55388c93ea63e175ab91cf4b33328f8715faa7380d5ef4ae27f"}
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.834836 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z"
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.834924 4870 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-kkf4z container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.834958 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" podUID="3e75fb87-2eda-4658-a5dc-fb9424ed9cb7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.840615 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2mj87" event={"ID":"2aa49ce7-f902-408a-94f1-da14a661e813","Type":"ContainerStarted","Data":"8f4187e8ca6a92ee4bd9e6838556b7bbedaba64d18a5aff0c37ced233ebdc3dd"}
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.840662 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2mj87" event={"ID":"2aa49ce7-f902-408a-94f1-da14a661e813","Type":"ContainerStarted","Data":"11900425e10bfa9bf6c9c649d5dac8048b1ed7e104a45655b98935b712a80d21"}
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.848511 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65"
Jan 30 08:11:56 crc kubenswrapper[4870]: E0130 08:11:56.851190 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:57.351169694 +0000 UTC m=+156.046716983 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.852386 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xvt4n" event={"ID":"78a27619-258e-4bed-afb0-1706904c6f9d","Type":"ContainerStarted","Data":"6046da1499505b13ecbf857cd5774cec4a983a18a644f4c11ced33d3d17c2dc7"}
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.862706 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2l7mq" event={"ID":"7914d7ba-40e4-4ada-9dd4-66f3d86d5dc8","Type":"ContainerStarted","Data":"ee2ec0b0e847b19de186b8dd8d6e1ea29b0e49e8543b060582c2194a4c0976cc"}
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.880144 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nw244" event={"ID":"a361e11a-9e2f-4abf-a8c1-783f328f13a9","Type":"ContainerStarted","Data":"f87c9bc6db7756626c3bb355d208bad5fa6ff7a04ae5ba924ce27ef55220d78e"}
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.882010 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qnjxg" podStartSLOduration=129.881988732 podStartE2EDuration="2m9.881988732s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:56.852125312 +0000 UTC m=+155.547672421" watchObservedRunningTime="2026-01-30 08:11:56.881988732 +0000 UTC m=+155.577535841"
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.882440 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" podStartSLOduration=129.882434725 podStartE2EDuration="2m9.882434725s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:56.879573131 +0000 UTC m=+155.575120240" watchObservedRunningTime="2026-01-30 08:11:56.882434725 +0000 UTC m=+155.577981834"
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.932493 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-26vrg" event={"ID":"7053ea40-6d30-41d8-bcb1-8f55e95feb22","Type":"ContainerStarted","Data":"a6cd1f74bc22f4d0359e860e5ebf6a968ae994407444db4a51eff755c334d1c8"}
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.956142 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957"
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.956596 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:11:56 crc kubenswrapper[4870]: E0130 08:11:56.960725 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:57.460689321 +0000 UTC m=+156.156236430 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.990773 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-2mj87" podStartSLOduration=129.990745017 podStartE2EDuration="2m9.990745017s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:56.941348582 +0000 UTC m=+155.636895691" watchObservedRunningTime="2026-01-30 08:11:56.990745017 +0000 UTC m=+155.686292116"
Jan 30 08:11:56 crc kubenswrapper[4870]: I0130 08:11:56.995051 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nw244" podStartSLOduration=129.995018874 podStartE2EDuration="2m9.995018874s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:56.99084633 +0000 UTC m=+155.686393439" watchObservedRunningTime="2026-01-30 08:11:56.995018874 +0000 UTC m=+155.690565983"
Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.059206 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65"
Jan 30 08:11:57 crc kubenswrapper[4870]: E0130 08:11:57.079273 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:57.579250546 +0000 UTC m=+156.274797655 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
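In every "Observed pod startup duration" entry in this log, podStartSLOduration and podStartE2EDuration are the same number, because firstStartedPulling and lastFinishedPulling are the zero timestamp: no image pull was observed, so there is no pull time to subtract. The relationship the tracker reports is E2E = observedRunningTime - podCreationTimestamp, with the SLO figure excluding time spent pulling images; that subtraction rule is the stated assumption of this worked sketch:

    package main

    import (
        "fmt"
        "time"
    )

    // sloDuration mirrors the fields of the log entries: the end-to-end start
    // duration, and an SLO duration that leaves out observed image-pull time.
    func sloDuration(created, firstPull, lastPull, running time.Time) (slo, e2e time.Duration) {
        e2e = running.Sub(created)
        slo = e2e
        if !firstPull.IsZero() && !lastPull.IsZero() {
            slo -= lastPull.Sub(firstPull) // no-op here: both pull times are zero
        }
        return slo, e2e
    }

    func main() {
        // Values from the machine-api-operator entry earlier in this log.
        created := time.Date(2026, 1, 30, 8, 9, 47, 0, time.UTC)
        running := time.Date(2026, 1, 30, 8, 11, 55, 784638850, time.UTC)
        slo, e2e := sloDuration(created, time.Time{}, time.Time{}, running)
        fmt.Println(slo, e2e) // 2m8.78463885s 2m8.78463885s
    }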
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.172156 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:57 crc kubenswrapper[4870]: E0130 08:11:57.173008 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:57.672968378 +0000 UTC m=+156.368515487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.233688 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.277184 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:57 crc kubenswrapper[4870]: E0130 08:11:57.277698 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:57.777675854 +0000 UTC m=+156.473222963 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.378624 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:57 crc kubenswrapper[4870]: E0130 08:11:57.378963 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:57.878932088 +0000 UTC m=+156.574479197 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.379723 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:57 crc kubenswrapper[4870]: E0130 08:11:57.380316 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:57.880297578 +0000 UTC m=+156.575844687 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.480547 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:57 crc kubenswrapper[4870]: E0130 08:11:57.480715 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:57.980680307 +0000 UTC m=+156.676227416 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.480997 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:57 crc kubenswrapper[4870]: E0130 08:11:57.481391 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:57.981372907 +0000 UTC m=+156.676920016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.581757 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:57 crc kubenswrapper[4870]: E0130 08:11:57.582177 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:58.082157948 +0000 UTC m=+156.777705057 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.682961 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:57 crc kubenswrapper[4870]: E0130 08:11:57.683453 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:58.183436833 +0000 UTC m=+156.878983942 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.757480 4870 patch_prober.go:28] interesting pod/router-default-5444994796-dfwzs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:11:57 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Jan 30 08:11:57 crc kubenswrapper[4870]: [+]process-running ok Jan 30 08:11:57 crc kubenswrapper[4870]: healthz check failed Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.757523 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dfwzs" podUID="fe4278ad-53ec-4f7f-9c39-a00b6fa505c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.786445 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:57 crc kubenswrapper[4870]: E0130 08:11:57.786618 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:58.286593723 +0000 UTC m=+156.982140832 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.786757 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:57 crc kubenswrapper[4870]: E0130 08:11:57.787113 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:58.287098578 +0000 UTC m=+156.982645697 (durationBeforeRetry 500ms). 
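The router-default startup probe output above is worth reading as a structured health report rather than a single failure: the router's healthz endpoint aggregates sub-checks, and while [+]process-running already passes, [-]backend-http and [-]has-synced still fail, so the endpoint returns HTTP 500 and the kubelet records a startup-probe failure. The kubelet's HTTP prober counts any status code outside the 200-399 range as a failure; a rough stand-in for that check (the URL below is illustrative, the real target comes from the pod's probe definition):

// Approximation of the kubelet HTTP probe semantics seen in these entries.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func probe(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused" while the port is not bound yet
	}
	defer resp.Body.Close()
	// The kubelet logs only the start of the response body, as above.
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d (%s)", resp.StatusCode, body)
	}
	return nil
}

func main() {
	// Hypothetical router health endpoint; substitute the pod's real probe target.
	if err := probe("http://10.217.0.2:1936/healthz"); err != nil {
		fmt.Println("probe failed:", err)
	}
}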
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.888160 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:57 crc kubenswrapper[4870]: E0130 08:11:57.888500 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:58.388481505 +0000 UTC m=+157.084028624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.939209 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fn5hp" event={"ID":"53a09b74-1b42-4535-a853-0752b6d1f90a","Type":"ContainerStarted","Data":"42ad20899b9f94ef4a43eeecc11f3c9836d9eb8489f0cd70f886594b65e81c81"} Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.939307 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fn5hp" Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.941421 4870 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-fn5hp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.941465 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fn5hp" podUID="53a09b74-1b42-4535-a853-0752b6d1f90a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.942718 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-g6x2r" event={"ID":"38a2c6cb-fd9d-42f6-8774-647c544bd0f9","Type":"ContainerStarted","Data":"944442945f4de7f04d0f44fb50c9908a21982de8b10fe905cf5ec5fc7f4bb92e"} Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.944809 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-v7bvt" 
event={"ID":"78280554-7b5b-4ccf-a674-2664144e4f5a","Type":"ContainerStarted","Data":"f0f3f59b5cafdfae5225f86a90c6cf9ae1908ad7f3a92beb7a19c44d0f5332d1"} Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.945150 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-v7bvt" Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.946973 4870 patch_prober.go:28] interesting pod/downloads-7954f5f757-v7bvt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.947341 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v7bvt" podUID="78280554-7b5b-4ccf-a674-2664144e4f5a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.948504 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qnjxg" event={"ID":"e64f35db-e72b-4d73-b501-7c2aff5cc609","Type":"ContainerStarted","Data":"219e045fac50283e20f42a18f9f0dfc83abd0a06efcfe2cdf1d02b73acaca0d6"} Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.960690 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tsghw" event={"ID":"1a6267b2-1222-4c0b-a890-c146d83b583d","Type":"ContainerStarted","Data":"a347357227dc94f9cad66b311dadd61011549ff3c1637fd2e5f1aa9526187ca5"} Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.960742 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tsghw" event={"ID":"1a6267b2-1222-4c0b-a890-c146d83b583d","Type":"ContainerStarted","Data":"cdda73ed7cff4b1a880889d1305b1270124964338ac480926363c03599fb46c9"} Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.960752 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tsghw" event={"ID":"1a6267b2-1222-4c0b-a890-c146d83b583d","Type":"ContainerStarted","Data":"b526c151540d4bd0d2349416bad116d4bf94f9f25a3754776cc42e0135c7a373"} Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.965551 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.965940 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.968952 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fn5hp" podStartSLOduration=130.968941407 podStartE2EDuration="2m10.968941407s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:57.968254447 +0000 UTC m=+156.663801556" watchObservedRunningTime="2026-01-30 08:11:57.968941407 +0000 UTC m=+156.664488516" Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.969358 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jkpz9" event={"ID":"836ae3f6-06f5-4996-9f9c-cacfb63fe855","Type":"ContainerStarted","Data":"3c644e1da743db45843a7c89410d1e8d0cc7ae64f0ecbd61d7339e44b4a5a820"} Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.969798 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jkpz9" Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.971675 4870 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jkpz9 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.971742 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jkpz9" podUID="836ae3f6-06f5-4996-9f9c-cacfb63fe855" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.977388 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vzzk7" event={"ID":"6f6b1608-a0e3-4b68-ae9b-f8cfeed9cd72","Type":"ContainerStarted","Data":"27068d120c5196dbb3136eda3bb61411f909927230fb86187a8aad48f808fba7"} Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.985145 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mnsdp" event={"ID":"a58a222f-98a0-46b4-9ea8-36a922f6a349","Type":"ContainerStarted","Data":"1b390f3142180446c1b693055b28cc2900ebef38b1a73ef2a44332d6957e844f"} Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.987638 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2l7mq" event={"ID":"7914d7ba-40e4-4ada-9dd4-66f3d86d5dc8","Type":"ContainerStarted","Data":"1a1f2688b1bc0f658fd75296f7d41dffb74f506795545c277ef33e5ab318d257"} Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.990000 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:57 crc kubenswrapper[4870]: E0130 08:11:57.992206 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:58.492187692 +0000 UTC m=+157.187734891 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.999467 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-26vrg" event={"ID":"7053ea40-6d30-41d8-bcb1-8f55e95feb22","Type":"ContainerStarted","Data":"683d759e6003179c4d952f6e8ddb3547a76cf803b0597363261354aee3390294"} Jan 30 08:11:57 crc kubenswrapper[4870]: I0130 08:11:57.999543 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-26vrg" event={"ID":"7053ea40-6d30-41d8-bcb1-8f55e95feb22","Type":"ContainerStarted","Data":"76081e0a15d9d1488f09e8b04bec447ab3f086735cd5d98bedf2374079c1e3c8"} Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.000423 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-26vrg" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.003292 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-szhwx" event={"ID":"a8a1f91a-b48e-442f-9ab6-d704b3927315","Type":"ContainerStarted","Data":"926ec3b855a5eaa67c84948cf649ddbbf18aaa9552ca1d006f47ad74c5fccdd2"} Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.006205 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xvt4n" event={"ID":"78a27619-258e-4bed-afb0-1706904c6f9d","Type":"ContainerStarted","Data":"d1e71e306955b9f4387a716e21ede390d73cd63bf2646abc6ae46f2bd401f303"} Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.012634 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-v7bvt" podStartSLOduration=131.012609984 podStartE2EDuration="2m11.012609984s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:58.004085063 +0000 UTC m=+156.699632172" watchObservedRunningTime="2026-01-30 08:11:58.012609984 +0000 UTC m=+156.708157083" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.024698 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr" event={"ID":"46d623aa-7e54-4c20-aed3-3f125395a073","Type":"ContainerStarted","Data":"c75bd9b3416c21f2110f1fd2fdc5b1f4b93aec54cc03b5e05f9582a8f0fb8b8a"} Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.025331 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.029006 4870 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dhpbr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Jan 30 08:11:58 crc 
kubenswrapper[4870]: I0130 08:11:58.029054 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr" podUID="46d623aa-7e54-4c20-aed3-3f125395a073" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.036319 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ps5nw" event={"ID":"f6d9ba19-88ea-489c-9f03-918e8b225e3b","Type":"ContainerStarted","Data":"f12a45c0b2813701cf01d7ec25541511ec2cb7be0c7a627f832aa65d24825d35"} Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.036376 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ps5nw" event={"ID":"f6d9ba19-88ea-489c-9f03-918e8b225e3b","Type":"ContainerStarted","Data":"c0b22b82bacefd0aaf817979a115cac87fd828c84574bd2a3b27b98ff1218cb9"} Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.041649 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ll2mq" event={"ID":"e740ffac-368d-45d5-89a8-25d370581945","Type":"ContainerStarted","Data":"249e9d632198441fe97737715cfaab4c4ef133846240467d86fd289f4ffcd016"} Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.042835 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92" event={"ID":"93fd6b37-eee2-4fd5-aa18-51eecea65a3b","Type":"ContainerStarted","Data":"2065e95b92696a8bb664d6087b11271d4f8873eafbf3cee077ccf40c2dbf8d79"} Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.043064 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-g6x2r" podStartSLOduration=131.043049441 podStartE2EDuration="2m11.043049441s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:58.041079333 +0000 UTC m=+156.736626442" watchObservedRunningTime="2026-01-30 08:11:58.043049441 +0000 UTC m=+156.738596550" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.056430 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zghdb" event={"ID":"539f32b7-3075-49f4-b9f6-e63ac1d76d61","Type":"ContainerStarted","Data":"328a07eeea88199302b8fed3e521523484c74721deba35a26a97335a3bf02afa"} Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.071515 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kvwdd" event={"ID":"e2bdbb40-bece-420f-9ff7-bdeff90c8bd2","Type":"ContainerStarted","Data":"6cd585c0386c3e8287203c7e53c36c1a328f83d7c7bde25c1337303931a0e717"} Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.075180 4870 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-kkf4z container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.075224 4870 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" podUID="3e75fb87-2eda-4658-a5dc-fb9424ed9cb7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.090946 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:58 crc kubenswrapper[4870]: E0130 08:11:58.092357 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:58.592326384 +0000 UTC m=+157.287873493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.096800 4870 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jh9j6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.096856 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" podUID="8ede517d-773d-4f0b-8c0a-42ae13359f95" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.124474 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zghdb" podStartSLOduration=131.12445674 podStartE2EDuration="2m11.12445674s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:58.122521674 +0000 UTC m=+156.818068783" watchObservedRunningTime="2026-01-30 08:11:58.12445674 +0000 UTC m=+156.820003849" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.124721 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tsghw" podStartSLOduration=131.124715668 podStartE2EDuration="2m11.124715668s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:58.081755312 +0000 UTC m=+156.777302421" watchObservedRunningTime="2026-01-30 08:11:58.124715668 +0000 UTC m=+156.820262777" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.195679 
4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:58 crc kubenswrapper[4870]: E0130 08:11:58.216617 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:58.716593106 +0000 UTC m=+157.412140215 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.233537 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ll2mq" podStartSLOduration=131.233499564 podStartE2EDuration="2m11.233499564s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:58.176473244 +0000 UTC m=+156.872020353" watchObservedRunningTime="2026-01-30 08:11:58.233499564 +0000 UTC m=+156.929046673" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.234396 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kvwdd" podStartSLOduration=131.234384391 podStartE2EDuration="2m11.234384391s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:58.232069713 +0000 UTC m=+156.927616832" watchObservedRunningTime="2026-01-30 08:11:58.234384391 +0000 UTC m=+156.929931500" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.300776 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:58 crc kubenswrapper[4870]: E0130 08:11:58.301294 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:58.801248851 +0000 UTC m=+157.496796120 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.310252 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92" podStartSLOduration=131.310234996 podStartE2EDuration="2m11.310234996s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:58.278668986 +0000 UTC m=+156.974216105" watchObservedRunningTime="2026-01-30 08:11:58.310234996 +0000 UTC m=+157.005782105" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.322064 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.322562 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.344921 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-26vrg" podStartSLOduration=131.344896507 podStartE2EDuration="2m11.344896507s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:58.317818909 +0000 UTC m=+157.013366018" watchObservedRunningTime="2026-01-30 08:11:58.344896507 +0000 UTC m=+157.040443616" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.373689 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xvt4n" podStartSLOduration=131.373659395 podStartE2EDuration="2m11.373659395s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:58.347917896 +0000 UTC m=+157.043465015" watchObservedRunningTime="2026-01-30 08:11:58.373659395 +0000 UTC m=+157.069206494" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.395286 4870 csr.go:261] certificate signing request csr-f64lz is approved, waiting to be issued Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.402012 4870 csr.go:257] certificate signing request csr-f64lz is issued Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.410409 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:58 crc kubenswrapper[4870]: E0130 08:11:58.410928 4870 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:58.910910363 +0000 UTC m=+157.606457462 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.416106 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-szhwx" podStartSLOduration=7.416080726 podStartE2EDuration="7.416080726s" podCreationTimestamp="2026-01-30 08:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:58.37756057 +0000 UTC m=+157.073107679" watchObservedRunningTime="2026-01-30 08:11:58.416080726 +0000 UTC m=+157.111627835" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.453194 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ps5nw" podStartSLOduration=131.453179289 podStartE2EDuration="2m11.453179289s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:58.450847941 +0000 UTC m=+157.146395050" watchObservedRunningTime="2026-01-30 08:11:58.453179289 +0000 UTC m=+157.148726398" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.454165 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr" podStartSLOduration=131.454159717 podStartE2EDuration="2m11.454159717s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:58.418149206 +0000 UTC m=+157.113696315" watchObservedRunningTime="2026-01-30 08:11:58.454159717 +0000 UTC m=+157.149706816" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.483441 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jkpz9" podStartSLOduration=131.48341124 podStartE2EDuration="2m11.48341124s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:58.483260535 +0000 UTC m=+157.178807644" watchObservedRunningTime="2026-01-30 08:11:58.48341124 +0000 UTC m=+157.178958349" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.507447 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vzzk7" podStartSLOduration=131.507411517 podStartE2EDuration="2m11.507411517s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-30 08:11:58.506318635 +0000 UTC m=+157.201865754" watchObservedRunningTime="2026-01-30 08:11:58.507411517 +0000 UTC m=+157.202958626" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.511344 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:58 crc kubenswrapper[4870]: E0130 08:11:58.511739 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:59.011723405 +0000 UTC m=+157.707270514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.565974 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-2l7mq" podStartSLOduration=131.565951302 podStartE2EDuration="2m11.565951302s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:58.561992916 +0000 UTC m=+157.257540015" watchObservedRunningTime="2026-01-30 08:11:58.565951302 +0000 UTC m=+157.261498411" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.613434 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:58 crc kubenswrapper[4870]: E0130 08:11:58.614106 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:59.114082101 +0000 UTC m=+157.809629210 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.714850 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:58 crc kubenswrapper[4870]: E0130 08:11:58.715062 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:59.215023536 +0000 UTC m=+157.910570645 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.715106 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:58 crc kubenswrapper[4870]: E0130 08:11:58.715509 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:59.215501 +0000 UTC m=+157.911048109 (durationBeforeRetry 500ms). 
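The csr-f64lz entries a little further up record the tail of kubelet client-certificate rotation: the kubelet posts a CertificateSigningRequest, an approver adds the Approved condition ("approved, waiting to be issued"), and the signer then populates .status.certificate ("is issued"). A sketch that reads that state back for the CSR named in the log (kubeconfig path again hypothetical):

// Inspect approval and issuance state of the CSR from the log.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	csr, err := cs.CertificatesV1().CertificateSigningRequests().Get(context.TODO(), "csr-f64lz", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range csr.Status.Conditions {
		fmt.Printf("condition: %s (%s)\n", c.Type, c.Reason)
	}
	// "Issued" in the kubelet log means the signer filled in the certificate.
	fmt.Println("certificate issued:", len(csr.Status.Certificate) > 0)
}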
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.756576 4870 patch_prober.go:28] interesting pod/router-default-5444994796-dfwzs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:11:58 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Jan 30 08:11:58 crc kubenswrapper[4870]: [+]process-running ok Jan 30 08:11:58 crc kubenswrapper[4870]: healthz check failed Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.756675 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dfwzs" podUID="fe4278ad-53ec-4f7f-9c39-a00b6fa505c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.816328 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:58 crc kubenswrapper[4870]: E0130 08:11:58.816609 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:59.316565179 +0000 UTC m=+158.012112288 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.816965 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:58 crc kubenswrapper[4870]: E0130 08:11:58.817328 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:59.317313151 +0000 UTC m=+158.012860260 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.867930 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb" Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.918092 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:58 crc kubenswrapper[4870]: E0130 08:11:58.918333 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:59.418292357 +0000 UTC m=+158.113839476 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:58 crc kubenswrapper[4870]: I0130 08:11:58.918427 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:58 crc kubenswrapper[4870]: E0130 08:11:58.918962 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:59.418949246 +0000 UTC m=+158.114496535 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.020510 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:59 crc kubenswrapper[4870]: E0130 08:11:59.020726 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:59.520698945 +0000 UTC m=+158.216246054 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.020835 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:59 crc kubenswrapper[4870]: E0130 08:11:59.021147 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:59.521139868 +0000 UTC m=+158.216686977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.080195 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mnsdp" event={"ID":"a58a222f-98a0-46b4-9ea8-36a922f6a349","Type":"ContainerStarted","Data":"0bb5c8bc6036bcb97d1a72d9dbc36f24717b2c952da882e0470f6832f1b5ef82"}
Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.080399 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-mnsdp"
Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.083108 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" event={"ID":"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca","Type":"ContainerStarted","Data":"03e2b42f7c8b9aa3ab6e729b31a40a3da732232ecf1e060e15a754b40c3c4965"}
Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.084921 4870 patch_prober.go:28] interesting pod/downloads-7954f5f757-v7bvt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.084987 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v7bvt" podUID="78280554-7b5b-4ccf-a674-2664144e4f5a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.084932 4870 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jh9j6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.085336 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" podUID="8ede517d-773d-4f0b-8c0a-42ae13359f95" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.094431 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fn5hp"
Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.099399 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-brrxb"
Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.100025 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jkpz9"
Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.113031 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-mnsdp" podStartSLOduration=9.113006226 podStartE2EDuration="9.113006226s" podCreationTimestamp="2026-01-30 08:11:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:11:59.110950935 +0000 UTC m=+157.806498044" watchObservedRunningTime="2026-01-30 08:11:59.113006226 +0000 UTC m=+157.808553335"
Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.121699 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:11:59 crc kubenswrapper[4870]: E0130 08:11:59.122007 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:59.62196321 +0000 UTC m=+158.317510319 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.122301 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65"
Jan 30 08:11:59 crc kubenswrapper[4870]: E0130 08:11:59.122787 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:59.622778184 +0000 UTC m=+158.318325283 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.123300 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.223592 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:59 crc kubenswrapper[4870]: E0130 08:11:59.223796 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:59.72376531 +0000 UTC m=+158.419312409 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.224169 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:59 crc kubenswrapper[4870]: E0130 08:11:59.226493 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:59.72647079 +0000 UTC m=+158.422017899 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.328799 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:59 crc kubenswrapper[4870]: E0130 08:11:59.329843 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:11:59.829818196 +0000 UTC m=+158.525365305 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.403989 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-30 08:06:58 +0000 UTC, rotation deadline is 2026-10-16 22:45:05.123217466 +0000 UTC Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.404041 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6230h33m5.719178943s for next certificate rotation Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.433575 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:59 crc kubenswrapper[4870]: E0130 08:11:59.433962 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:11:59.933950024 +0000 UTC m=+158.629497133 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.535038 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:59 crc kubenswrapper[4870]: E0130 08:11:59.535432 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:12:00.035410565 +0000 UTC m=+158.730957664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.636654 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:59 crc kubenswrapper[4870]: E0130 08:11:59.637086 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:12:00.137069961 +0000 UTC m=+158.832617070 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.697429 4870 patch_prober.go:28] interesting pod/apiserver-76f77b778f-2g2tj container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 30 08:11:59 crc kubenswrapper[4870]: [+]log ok Jan 30 08:11:59 crc kubenswrapper[4870]: [+]etcd ok Jan 30 08:11:59 crc kubenswrapper[4870]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 30 08:11:59 crc kubenswrapper[4870]: [+]poststarthook/generic-apiserver-start-informers ok Jan 30 08:11:59 crc kubenswrapper[4870]: [+]poststarthook/max-in-flight-filter ok Jan 30 08:11:59 crc kubenswrapper[4870]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 30 08:11:59 crc kubenswrapper[4870]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 30 08:11:59 crc kubenswrapper[4870]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 30 08:11:59 crc kubenswrapper[4870]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 30 08:11:59 crc kubenswrapper[4870]: [+]poststarthook/project.openshift.io-projectcache ok Jan 30 08:11:59 crc kubenswrapper[4870]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 30 08:11:59 crc kubenswrapper[4870]: [+]poststarthook/openshift.io-startinformers ok Jan 30 08:11:59 crc kubenswrapper[4870]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 30 08:11:59 crc kubenswrapper[4870]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 30 08:11:59 crc kubenswrapper[4870]: livez check failed Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.697529 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" podUID="0373f9a1-1537-4f29-905a-b0fb2affc113" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.737911 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:59 crc kubenswrapper[4870]: E0130 08:11:59.738164 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:12:00.23811294 +0000 UTC m=+158.933660049 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.738209 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:59 crc kubenswrapper[4870]: E0130 08:11:59.738623 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:12:00.238614284 +0000 UTC m=+158.934161393 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.760847 4870 patch_prober.go:28] interesting pod/router-default-5444994796-dfwzs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:11:59 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Jan 30 08:11:59 crc kubenswrapper[4870]: [+]process-running ok Jan 30 08:11:59 crc kubenswrapper[4870]: healthz check failed Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.760975 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dfwzs" podUID="fe4278ad-53ec-4f7f-9c39-a00b6fa505c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.839518 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:11:59 crc kubenswrapper[4870]: E0130 08:11:59.840491 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:12:00.340459035 +0000 UTC m=+159.036006144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.882848 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dhpbr" Jan 30 08:11:59 crc kubenswrapper[4870]: I0130 08:11:59.945627 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:11:59 crc kubenswrapper[4870]: E0130 08:11:59.946130 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:12:00.446114219 +0000 UTC m=+159.141661318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.046833 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:12:00 crc kubenswrapper[4870]: E0130 08:12:00.047275 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:12:00.54725174 +0000 UTC m=+159.242798839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.089725 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" event={"ID":"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca","Type":"ContainerStarted","Data":"2b70301241ea60405c8b0b8013c7390193ae81f6f401fa7f63a18489051b6db7"} Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.148553 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:12:00 crc kubenswrapper[4870]: E0130 08:12:00.150979 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:12:00.650964977 +0000 UTC m=+159.346512086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.249691 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:12:00 crc kubenswrapper[4870]: E0130 08:12:00.250108 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:12:00.750085799 +0000 UTC m=+159.445632908 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.353538 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65"
Jan 30 08:12:00 crc kubenswrapper[4870]: E0130 08:12:00.353980 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:12:00.85396312 +0000 UTC m=+159.549510229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.354155 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rk4lj"]
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.364532 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rk4lj"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.367422 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.377212 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rk4lj"]
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.433332 4870 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.454434 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.454664 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba2950a4-e1b9-45a9-9980-1b4169e0fb16-catalog-content\") pod \"certified-operators-rk4lj\" (UID: \"ba2950a4-e1b9-45a9-9980-1b4169e0fb16\") " pod="openshift-marketplace/certified-operators-rk4lj"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.454688 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg24l\" (UniqueName: \"kubernetes.io/projected/ba2950a4-e1b9-45a9-9980-1b4169e0fb16-kube-api-access-rg24l\") pod \"certified-operators-rk4lj\" (UID: \"ba2950a4-e1b9-45a9-9980-1b4169e0fb16\") " pod="openshift-marketplace/certified-operators-rk4lj"
Jan 30 08:12:00 crc kubenswrapper[4870]: E0130 08:12:00.454721 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:12:00.954688059 +0000 UTC m=+159.650235168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.454783 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.454909 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba2950a4-e1b9-45a9-9980-1b4169e0fb16-utilities\") pod \"certified-operators-rk4lj\" (UID: \"ba2950a4-e1b9-45a9-9980-1b4169e0fb16\") " pod="openshift-marketplace/certified-operators-rk4lj"
Jan 30 08:12:00 crc kubenswrapper[4870]: E0130 08:12:00.455526 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:12:00.955518254 +0000 UTC m=+159.651065573 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.521218 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cx2x5"]
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.522539 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cx2x5"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.538778 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.555847 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.556135 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba2950a4-e1b9-45a9-9980-1b4169e0fb16-catalog-content\") pod \"certified-operators-rk4lj\" (UID: \"ba2950a4-e1b9-45a9-9980-1b4169e0fb16\") " pod="openshift-marketplace/certified-operators-rk4lj"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.556160 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg24l\" (UniqueName: \"kubernetes.io/projected/ba2950a4-e1b9-45a9-9980-1b4169e0fb16-kube-api-access-rg24l\") pod \"certified-operators-rk4lj\" (UID: \"ba2950a4-e1b9-45a9-9980-1b4169e0fb16\") " pod="openshift-marketplace/certified-operators-rk4lj"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.556203 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba2950a4-e1b9-45a9-9980-1b4169e0fb16-utilities\") pod \"certified-operators-rk4lj\" (UID: \"ba2950a4-e1b9-45a9-9980-1b4169e0fb16\") " pod="openshift-marketplace/certified-operators-rk4lj"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.556616 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba2950a4-e1b9-45a9-9980-1b4169e0fb16-utilities\") pod \"certified-operators-rk4lj\" (UID: \"ba2950a4-e1b9-45a9-9980-1b4169e0fb16\") " pod="openshift-marketplace/certified-operators-rk4lj"
Jan 30 08:12:00 crc kubenswrapper[4870]: E0130 08:12:00.556695 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:12:01.056679934 +0000 UTC m=+159.752227043 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.556915 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba2950a4-e1b9-45a9-9980-1b4169e0fb16-catalog-content\") pod \"certified-operators-rk4lj\" (UID: \"ba2950a4-e1b9-45a9-9980-1b4169e0fb16\") " pod="openshift-marketplace/certified-operators-rk4lj" Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.562171 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cx2x5"] Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.606864 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg24l\" (UniqueName: \"kubernetes.io/projected/ba2950a4-e1b9-45a9-9980-1b4169e0fb16-kube-api-access-rg24l\") pod \"certified-operators-rk4lj\" (UID: \"ba2950a4-e1b9-45a9-9980-1b4169e0fb16\") " pod="openshift-marketplace/certified-operators-rk4lj" Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.659362 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/258d3e35-5580-4108-889c-9d5d2f80c810-catalog-content\") pod \"community-operators-cx2x5\" (UID: \"258d3e35-5580-4108-889c-9d5d2f80c810\") " pod="openshift-marketplace/community-operators-cx2x5" Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.659493 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.659552 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/258d3e35-5580-4108-889c-9d5d2f80c810-utilities\") pod \"community-operators-cx2x5\" (UID: \"258d3e35-5580-4108-889c-9d5d2f80c810\") " pod="openshift-marketplace/community-operators-cx2x5" Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.659603 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44g22\" (UniqueName: \"kubernetes.io/projected/258d3e35-5580-4108-889c-9d5d2f80c810-kube-api-access-44g22\") pod \"community-operators-cx2x5\" (UID: \"258d3e35-5580-4108-889c-9d5d2f80c810\") " pod="openshift-marketplace/community-operators-cx2x5" Jan 30 08:12:00 crc kubenswrapper[4870]: E0130 08:12:00.660033 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:12:01.16001456 +0000 UTC m=+159.855561669 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.685416 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rk4lj"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.728138 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sdlrf"]
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.729067 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sdlrf"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.754905 4870 patch_prober.go:28] interesting pod/router-default-5444994796-dfwzs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 08:12:00 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld
Jan 30 08:12:00 crc kubenswrapper[4870]: [+]process-running ok
Jan 30 08:12:00 crc kubenswrapper[4870]: healthz check failed
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.754980 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dfwzs" podUID="fe4278ad-53ec-4f7f-9c39-a00b6fa505c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.761697 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.762419 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/258d3e35-5580-4108-889c-9d5d2f80c810-catalog-content\") pod \"community-operators-cx2x5\" (UID: \"258d3e35-5580-4108-889c-9d5d2f80c810\") " pod="openshift-marketplace/community-operators-cx2x5"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.762563 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/258d3e35-5580-4108-889c-9d5d2f80c810-utilities\") pod \"community-operators-cx2x5\" (UID: \"258d3e35-5580-4108-889c-9d5d2f80c810\") " pod="openshift-marketplace/community-operators-cx2x5"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.762626 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44g22\" (UniqueName: \"kubernetes.io/projected/258d3e35-5580-4108-889c-9d5d2f80c810-kube-api-access-44g22\") pod \"community-operators-cx2x5\" (UID: \"258d3e35-5580-4108-889c-9d5d2f80c810\") " pod="openshift-marketplace/community-operators-cx2x5"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.763530 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/258d3e35-5580-4108-889c-9d5d2f80c810-catalog-content\") pod \"community-operators-cx2x5\" (UID: \"258d3e35-5580-4108-889c-9d5d2f80c810\") " pod="openshift-marketplace/community-operators-cx2x5"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.763597 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/258d3e35-5580-4108-889c-9d5d2f80c810-utilities\") pod \"community-operators-cx2x5\" (UID: \"258d3e35-5580-4108-889c-9d5d2f80c810\") " pod="openshift-marketplace/community-operators-cx2x5"
Jan 30 08:12:00 crc kubenswrapper[4870]: E0130 08:12:00.764645 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:12:01.264622213 +0000 UTC m=+159.960169322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.767709 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sdlrf"]
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.846079 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44g22\" (UniqueName: \"kubernetes.io/projected/258d3e35-5580-4108-889c-9d5d2f80c810-kube-api-access-44g22\") pod \"community-operators-cx2x5\" (UID: \"258d3e35-5580-4108-889c-9d5d2f80c810\") " pod="openshift-marketplace/community-operators-cx2x5"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.864021 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e02d35f8-2e8c-47a3-87c9-9580ab766290-utilities\") pod \"certified-operators-sdlrf\" (UID: \"e02d35f8-2e8c-47a3-87c9-9580ab766290\") " pod="openshift-marketplace/certified-operators-sdlrf"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.864088 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wlfz\" (UniqueName: \"kubernetes.io/projected/e02d35f8-2e8c-47a3-87c9-9580ab766290-kube-api-access-6wlfz\") pod \"certified-operators-sdlrf\" (UID: \"e02d35f8-2e8c-47a3-87c9-9580ab766290\") " pod="openshift-marketplace/certified-operators-sdlrf"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.864151 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.864187 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e02d35f8-2e8c-47a3-87c9-9580ab766290-catalog-content\") pod \"certified-operators-sdlrf\" (UID: \"e02d35f8-2e8c-47a3-87c9-9580ab766290\") " pod="openshift-marketplace/certified-operators-sdlrf"
Jan 30 08:12:00 crc kubenswrapper[4870]: E0130 08:12:00.864631 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:12:01.36460598 +0000 UTC m=+160.060153149 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.926425 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vm685"]
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.927768 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vm685"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.948736 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vm685"]
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.970753 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.970977 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc41080-75c5-421f-baa8-f05792f74564-catalog-content\") pod \"community-operators-vm685\" (UID: \"abc41080-75c5-421f-baa8-f05792f74564\") " pod="openshift-marketplace/community-operators-vm685"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.971007 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e02d35f8-2e8c-47a3-87c9-9580ab766290-catalog-content\") pod \"certified-operators-sdlrf\" (UID: \"e02d35f8-2e8c-47a3-87c9-9580ab766290\") " pod="openshift-marketplace/certified-operators-sdlrf"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.971045 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc41080-75c5-421f-baa8-f05792f74564-utilities\") pod \"community-operators-vm685\" (UID: \"abc41080-75c5-421f-baa8-f05792f74564\") " pod="openshift-marketplace/community-operators-vm685"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.971084 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e02d35f8-2e8c-47a3-87c9-9580ab766290-utilities\") pod \"certified-operators-sdlrf\" (UID: \"e02d35f8-2e8c-47a3-87c9-9580ab766290\") " pod="openshift-marketplace/certified-operators-sdlrf"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.971102 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhq7x\" (UniqueName: \"kubernetes.io/projected/abc41080-75c5-421f-baa8-f05792f74564-kube-api-access-lhq7x\") pod \"community-operators-vm685\" (UID: \"abc41080-75c5-421f-baa8-f05792f74564\") " pod="openshift-marketplace/community-operators-vm685"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.971128 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wlfz\" (UniqueName: \"kubernetes.io/projected/e02d35f8-2e8c-47a3-87c9-9580ab766290-kube-api-access-6wlfz\") pod \"certified-operators-sdlrf\" (UID: \"e02d35f8-2e8c-47a3-87c9-9580ab766290\") " pod="openshift-marketplace/certified-operators-sdlrf"
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.971791 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e02d35f8-2e8c-47a3-87c9-9580ab766290-catalog-content\") pod \"certified-operators-sdlrf\" (UID: \"e02d35f8-2e8c-47a3-87c9-9580ab766290\") " pod="openshift-marketplace/certified-operators-sdlrf"
Jan 30 08:12:00 crc kubenswrapper[4870]: E0130 08:12:00.971956 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:12:01.471925113 +0000 UTC m=+160.167472222 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:12:00 crc kubenswrapper[4870]: I0130 08:12:00.979054 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e02d35f8-2e8c-47a3-87c9-9580ab766290-utilities\") pod \"certified-operators-sdlrf\" (UID: \"e02d35f8-2e8c-47a3-87c9-9580ab766290\") " pod="openshift-marketplace/certified-operators-sdlrf"
Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.026817 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wlfz\" (UniqueName: \"kubernetes.io/projected/e02d35f8-2e8c-47a3-87c9-9580ab766290-kube-api-access-6wlfz\") pod \"certified-operators-sdlrf\" (UID: \"e02d35f8-2e8c-47a3-87c9-9580ab766290\") " pod="openshift-marketplace/certified-operators-sdlrf"
Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.053340 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sdlrf"
Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.075443 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65"
Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.075488 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc41080-75c5-421f-baa8-f05792f74564-catalog-content\") pod \"community-operators-vm685\" (UID: \"abc41080-75c5-421f-baa8-f05792f74564\") " pod="openshift-marketplace/community-operators-vm685"
Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.075528 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc41080-75c5-421f-baa8-f05792f74564-utilities\") pod \"community-operators-vm685\" (UID: \"abc41080-75c5-421f-baa8-f05792f74564\") " pod="openshift-marketplace/community-operators-vm685"
Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.075558 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhq7x\" (UniqueName: \"kubernetes.io/projected/abc41080-75c5-421f-baa8-f05792f74564-kube-api-access-lhq7x\") pod \"community-operators-vm685\" (UID: \"abc41080-75c5-421f-baa8-f05792f74564\") " pod="openshift-marketplace/community-operators-vm685"
Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.076644 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc41080-75c5-421f-baa8-f05792f74564-catalog-content\") pod \"community-operators-vm685\" (UID: \"abc41080-75c5-421f-baa8-f05792f74564\") " pod="openshift-marketplace/community-operators-vm685"
Jan 30 08:12:01 crc kubenswrapper[4870]: E0130 08:12:01.077138 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:12:01.577116253 +0000 UTC m=+160.272663362 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sfs65" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.077598 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc41080-75c5-421f-baa8-f05792f74564-utilities\") pod \"community-operators-vm685\" (UID: \"abc41080-75c5-421f-baa8-f05792f74564\") " pod="openshift-marketplace/community-operators-vm685" Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.094608 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhq7x\" (UniqueName: \"kubernetes.io/projected/abc41080-75c5-421f-baa8-f05792f74564-kube-api-access-lhq7x\") pod \"community-operators-vm685\" (UID: \"abc41080-75c5-421f-baa8-f05792f74564\") " pod="openshift-marketplace/community-operators-vm685" Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.121781 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" event={"ID":"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca","Type":"ContainerStarted","Data":"14c69f5ef2c2a59869866261a25048d69105089421652bcd3d4a89fbbc8d330f"} Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.121837 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" event={"ID":"20fcc16b-f2b2-4a33-a8b2-567bec77d7ca","Type":"ContainerStarted","Data":"7f406aa6747911baa02be123e8dcdce8d7bdbcfb4199af028d16098d4b5732c3"} Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.142439 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cx2x5" Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.177518 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.177946 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-k7wnt" podStartSLOduration=11.177917014 podStartE2EDuration="11.177917014s" podCreationTimestamp="2026-01-30 08:11:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:12:01.174636038 +0000 UTC m=+159.870183147" watchObservedRunningTime="2026-01-30 08:12:01.177917014 +0000 UTC m=+159.873464123" Jan 30 08:12:01 crc kubenswrapper[4870]: E0130 08:12:01.178649 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:12:01.678632365 +0000 UTC m=+160.374179474 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.202416 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rk4lj"] Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.269440 4870 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T08:12:00.43336164Z","Handler":null,"Name":""} Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.278985 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.279050 4870 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.279089 4870 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.284756 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vm685" Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.286469 4870 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
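The records above capture a registration race: the MountVolume.MountDevice failure at 08:12:01.077 and the UnmountVolume.TearDown failure at 08:12:01.178 both fail for the same reason, kubevirt.io.hostpath-provisioner is "not found in the list of registered CSI drivers". The RegisterPlugin event at 08:12:01.269 (socket /var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock) and the csi_plugin.go validate/register messages right after it show the driver coming online, so the pending operations succeed once their 500ms durationBeforeRetry window expires; MountDevice itself is then a no-op because the driver does not advertise the STAGE_UNSTAGE_VOLUME capability. The sketch below is a minimal illustration of that per-operation backoff gating, not kubelet source: the type and function names are hypothetical, only the 500ms initial delay is taken from the log, and the cap is an assumed value.

package main

import (
	"errors"
	"fmt"
	"time"
)

// pendingOp tracks retry state for one volume operation, loosely modeled on
// the "No retries permitted until ..." gating visible in the log above.
type pendingOp struct {
	lastFailure time.Time
	backoff     time.Duration
}

const (
	initialBackoff = 500 * time.Millisecond // matches durationBeforeRetry in the log
	maxBackoff     = 2 * time.Minute        // assumed cap, not taken from the log
)

var errNoRetry = errors.New("no retries permitted yet")

// try runs op unless the previous failure is still inside its backoff window.
// A failure arms (or doubles) the backoff; a success resets it.
func (p *pendingOp) try(op func() error) error {
	if wait := time.Until(p.lastFailure.Add(p.backoff)); p.backoff > 0 && wait > 0 {
		return fmt.Errorf("%w: retry in %v", errNoRetry, wait.Round(time.Millisecond))
	}
	if err := op(); err != nil {
		p.lastFailure = time.Now()
		if p.backoff == 0 {
			p.backoff = initialBackoff
		} else if p.backoff *= 2; p.backoff > maxBackoff {
			p.backoff = maxBackoff
		}
		return err
	}
	*p = pendingOp{} // success clears the state
	return nil
}

func main() {
	driverRegistered := false
	mount := func() error {
		if !driverRegistered {
			return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
		}
		return nil
	}

	var op pendingOp
	fmt.Println(op.try(mount)) // fails: driver not registered yet; arms the 500ms backoff
	fmt.Println(op.try(mount)) // rejected: still inside the backoff window

	driverRegistered = true            // plugin registration arrives, as at 08:12:01.269
	time.Sleep(600 * time.Millisecond) // let the window expire
	fmt.Println(op.try(mount))         // <nil>: the retried operation now succeeds
}

Once registration completes, the driver is also reflected in the node's CSINode object (spec.drivers), which is the usual place to confirm that a driver name the kubelet reports as missing has actually landed on the node.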
Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.286503 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.334237 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sfs65\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.380849 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.401043 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.435399 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.487509 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sdlrf"] Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.686014 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cx2x5"] Jan 30 08:12:01 crc kubenswrapper[4870]: W0130 08:12:01.704248 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod258d3e35_5580_4108_889c_9d5d2f80c810.slice/crio-ea85190d876bcbca144726c237a14b6d31ba3248e8f165a1e622d666e72b6022 WatchSource:0}: Error finding container ea85190d876bcbca144726c237a14b6d31ba3248e8f165a1e622d666e72b6022: Status 404 returned error can't find the container with id ea85190d876bcbca144726c237a14b6d31ba3248e8f165a1e622d666e72b6022 Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.770996 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vm685"] Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.780542 4870 patch_prober.go:28] interesting pod/router-default-5444994796-dfwzs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:12:01 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Jan 30 08:12:01 crc kubenswrapper[4870]: [+]process-running ok Jan 30 08:12:01 crc kubenswrapper[4870]: healthz check failed Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.780596 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dfwzs" podUID="fe4278ad-53ec-4f7f-9c39-a00b6fa505c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.856362 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sfs65"] Jan 30 08:12:01 crc kubenswrapper[4870]: I0130 08:12:01.949283 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8nfd4" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.083911 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.133847 4870 generic.go:334] "Generic (PLEG): container finished" podID="93fd6b37-eee2-4fd5-aa18-51eecea65a3b" containerID="2065e95b92696a8bb664d6087b11271d4f8873eafbf3cee077ccf40c2dbf8d79" exitCode=0 Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.134005 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92" event={"ID":"93fd6b37-eee2-4fd5-aa18-51eecea65a3b","Type":"ContainerDied","Data":"2065e95b92696a8bb664d6087b11271d4f8873eafbf3cee077ccf40c2dbf8d79"} Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.137017 4870 generic.go:334] "Generic (PLEG): container finished" podID="e02d35f8-2e8c-47a3-87c9-9580ab766290" containerID="1ecf5db22e2b1fa8547549ce582ecddde377bbe670b1f97e03e9e9e6f42d4dae" exitCode=0 Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.137690 4870 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sdlrf" event={"ID":"e02d35f8-2e8c-47a3-87c9-9580ab766290","Type":"ContainerDied","Data":"1ecf5db22e2b1fa8547549ce582ecddde377bbe670b1f97e03e9e9e6f42d4dae"} Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.137798 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sdlrf" event={"ID":"e02d35f8-2e8c-47a3-87c9-9580ab766290","Type":"ContainerStarted","Data":"17d6a9bdca6c16fe2977a640455c60bcc06dd2ad4ecdc2b9c6411506d215c0be"} Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.143749 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.144663 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" event={"ID":"406fb8be-c783-4ef8-8aae-5430b0226d17","Type":"ContainerStarted","Data":"5552b3f0c363ddea2c6329d4466cd116049d2d523ebab1e5ac1b44fd378bdb79"} Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.144762 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" event={"ID":"406fb8be-c783-4ef8-8aae-5430b0226d17","Type":"ContainerStarted","Data":"47d84e04f9b3f93637b83fdd855c471e56293ba330cba3caf1369ea3f8340bb4"} Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.144931 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.148313 4870 generic.go:334] "Generic (PLEG): container finished" podID="abc41080-75c5-421f-baa8-f05792f74564" containerID="c5a44e3995c68d711cf7ffd635d1bf773b8cff7c52ede4919575b0156f885354" exitCode=0 Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.148404 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vm685" event={"ID":"abc41080-75c5-421f-baa8-f05792f74564","Type":"ContainerDied","Data":"c5a44e3995c68d711cf7ffd635d1bf773b8cff7c52ede4919575b0156f885354"} Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.148444 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vm685" event={"ID":"abc41080-75c5-421f-baa8-f05792f74564","Type":"ContainerStarted","Data":"96518355e0bd9b243a322652ed93adea62f75e712bc08772e1f193f3dde1d1a9"} Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.151045 4870 generic.go:334] "Generic (PLEG): container finished" podID="ba2950a4-e1b9-45a9-9980-1b4169e0fb16" containerID="2e15cf3e43d60efa400786600f10aabddcac1a402cf20155c96332c4d505ad73" exitCode=0 Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.151119 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk4lj" event={"ID":"ba2950a4-e1b9-45a9-9980-1b4169e0fb16","Type":"ContainerDied","Data":"2e15cf3e43d60efa400786600f10aabddcac1a402cf20155c96332c4d505ad73"} Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.151148 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk4lj" event={"ID":"ba2950a4-e1b9-45a9-9980-1b4169e0fb16","Type":"ContainerStarted","Data":"b4deb94680d10a0e49b737adc1e5d0d479b58878615ce9ba8009bd204fb58e39"} Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.158499 4870 generic.go:334] "Generic (PLEG): container finished" 
podID="258d3e35-5580-4108-889c-9d5d2f80c810" containerID="b8e1ab4ce4d07cf81dd3964239182751d6d8a8cb595e0cabe44b1efd32e0f612" exitCode=0 Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.159832 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cx2x5" event={"ID":"258d3e35-5580-4108-889c-9d5d2f80c810","Type":"ContainerDied","Data":"b8e1ab4ce4d07cf81dd3964239182751d6d8a8cb595e0cabe44b1efd32e0f612"} Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.159860 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cx2x5" event={"ID":"258d3e35-5580-4108-889c-9d5d2f80c810","Type":"ContainerStarted","Data":"ea85190d876bcbca144726c237a14b6d31ba3248e8f165a1e622d666e72b6022"} Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.335456 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jqng8"] Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.337316 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jqng8" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.358426 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.367741 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jqng8"] Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.378200 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.379116 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.399317 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.399347 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.426287 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.429314 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz6vk\" (UniqueName: \"kubernetes.io/projected/56cb5ce8-da4f-4c24-9805-18a91b316bcd-kube-api-access-nz6vk\") pod \"redhat-marketplace-jqng8\" (UID: \"56cb5ce8-da4f-4c24-9805-18a91b316bcd\") " pod="openshift-marketplace/redhat-marketplace-jqng8" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.429501 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56cb5ce8-da4f-4c24-9805-18a91b316bcd-utilities\") pod \"redhat-marketplace-jqng8\" (UID: \"56cb5ce8-da4f-4c24-9805-18a91b316bcd\") " pod="openshift-marketplace/redhat-marketplace-jqng8" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.429615 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442fd418-e9e8-4cda-8e47-0a2780ae306d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"442fd418-e9e8-4cda-8e47-0a2780ae306d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.429731 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56cb5ce8-da4f-4c24-9805-18a91b316bcd-catalog-content\") pod \"redhat-marketplace-jqng8\" (UID: \"56cb5ce8-da4f-4c24-9805-18a91b316bcd\") " pod="openshift-marketplace/redhat-marketplace-jqng8" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.429818 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/442fd418-e9e8-4cda-8e47-0a2780ae306d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"442fd418-e9e8-4cda-8e47-0a2780ae306d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.532819 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442fd418-e9e8-4cda-8e47-0a2780ae306d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"442fd418-e9e8-4cda-8e47-0a2780ae306d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.532923 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56cb5ce8-da4f-4c24-9805-18a91b316bcd-catalog-content\") pod \"redhat-marketplace-jqng8\" (UID: \"56cb5ce8-da4f-4c24-9805-18a91b316bcd\") " pod="openshift-marketplace/redhat-marketplace-jqng8" Jan 30 08:12:02 crc 
kubenswrapper[4870]: I0130 08:12:02.532954 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/442fd418-e9e8-4cda-8e47-0a2780ae306d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"442fd418-e9e8-4cda-8e47-0a2780ae306d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.532973 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz6vk\" (UniqueName: \"kubernetes.io/projected/56cb5ce8-da4f-4c24-9805-18a91b316bcd-kube-api-access-nz6vk\") pod \"redhat-marketplace-jqng8\" (UID: \"56cb5ce8-da4f-4c24-9805-18a91b316bcd\") " pod="openshift-marketplace/redhat-marketplace-jqng8" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.533006 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56cb5ce8-da4f-4c24-9805-18a91b316bcd-utilities\") pod \"redhat-marketplace-jqng8\" (UID: \"56cb5ce8-da4f-4c24-9805-18a91b316bcd\") " pod="openshift-marketplace/redhat-marketplace-jqng8" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.533536 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56cb5ce8-da4f-4c24-9805-18a91b316bcd-utilities\") pod \"redhat-marketplace-jqng8\" (UID: \"56cb5ce8-da4f-4c24-9805-18a91b316bcd\") " pod="openshift-marketplace/redhat-marketplace-jqng8" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.533591 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/442fd418-e9e8-4cda-8e47-0a2780ae306d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"442fd418-e9e8-4cda-8e47-0a2780ae306d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.533640 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56cb5ce8-da4f-4c24-9805-18a91b316bcd-catalog-content\") pod \"redhat-marketplace-jqng8\" (UID: \"56cb5ce8-da4f-4c24-9805-18a91b316bcd\") " pod="openshift-marketplace/redhat-marketplace-jqng8" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.558716 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442fd418-e9e8-4cda-8e47-0a2780ae306d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"442fd418-e9e8-4cda-8e47-0a2780ae306d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.566918 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz6vk\" (UniqueName: \"kubernetes.io/projected/56cb5ce8-da4f-4c24-9805-18a91b316bcd-kube-api-access-nz6vk\") pod \"redhat-marketplace-jqng8\" (UID: \"56cb5ce8-da4f-4c24-9805-18a91b316bcd\") " pod="openshift-marketplace/redhat-marketplace-jqng8" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.579717 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" podStartSLOduration=135.579693398 podStartE2EDuration="2m15.579693398s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-30 08:12:02.559803582 +0000 UTC m=+161.255350691" watchObservedRunningTime="2026-01-30 08:12:02.579693398 +0000 UTC m=+161.275240507" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.667794 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jqng8" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.714303 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ddg46"] Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.715893 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddg46" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.716799 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.734498 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ddg46"] Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.735555 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/025ee8c8-8a97-4158-88fb-c4fa23f5c9c2-catalog-content\") pod \"redhat-marketplace-ddg46\" (UID: \"025ee8c8-8a97-4158-88fb-c4fa23f5c9c2\") " pod="openshift-marketplace/redhat-marketplace-ddg46" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.735681 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025ee8c8-8a97-4158-88fb-c4fa23f5c9c2-utilities\") pod \"redhat-marketplace-ddg46\" (UID: \"025ee8c8-8a97-4158-88fb-c4fa23f5c9c2\") " pod="openshift-marketplace/redhat-marketplace-ddg46" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.735749 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sz97\" (UniqueName: \"kubernetes.io/projected/025ee8c8-8a97-4158-88fb-c4fa23f5c9c2-kube-api-access-4sz97\") pod \"redhat-marketplace-ddg46\" (UID: \"025ee8c8-8a97-4158-88fb-c4fa23f5c9c2\") " pod="openshift-marketplace/redhat-marketplace-ddg46" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.761428 4870 patch_prober.go:28] interesting pod/router-default-5444994796-dfwzs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:12:02 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Jan 30 08:12:02 crc kubenswrapper[4870]: [+]process-running ok Jan 30 08:12:02 crc kubenswrapper[4870]: healthz check failed Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.761603 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dfwzs" podUID="fe4278ad-53ec-4f7f-9c39-a00b6fa505c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.837864 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/025ee8c8-8a97-4158-88fb-c4fa23f5c9c2-catalog-content\") pod \"redhat-marketplace-ddg46\" (UID: \"025ee8c8-8a97-4158-88fb-c4fa23f5c9c2\") " 
pod="openshift-marketplace/redhat-marketplace-ddg46" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.838570 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025ee8c8-8a97-4158-88fb-c4fa23f5c9c2-utilities\") pod \"redhat-marketplace-ddg46\" (UID: \"025ee8c8-8a97-4158-88fb-c4fa23f5c9c2\") " pod="openshift-marketplace/redhat-marketplace-ddg46" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.838640 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sz97\" (UniqueName: \"kubernetes.io/projected/025ee8c8-8a97-4158-88fb-c4fa23f5c9c2-kube-api-access-4sz97\") pod \"redhat-marketplace-ddg46\" (UID: \"025ee8c8-8a97-4158-88fb-c4fa23f5c9c2\") " pod="openshift-marketplace/redhat-marketplace-ddg46" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.840169 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/025ee8c8-8a97-4158-88fb-c4fa23f5c9c2-catalog-content\") pod \"redhat-marketplace-ddg46\" (UID: \"025ee8c8-8a97-4158-88fb-c4fa23f5c9c2\") " pod="openshift-marketplace/redhat-marketplace-ddg46" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.840421 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025ee8c8-8a97-4158-88fb-c4fa23f5c9c2-utilities\") pod \"redhat-marketplace-ddg46\" (UID: \"025ee8c8-8a97-4158-88fb-c4fa23f5c9c2\") " pod="openshift-marketplace/redhat-marketplace-ddg46" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.862059 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sz97\" (UniqueName: \"kubernetes.io/projected/025ee8c8-8a97-4158-88fb-c4fa23f5c9c2-kube-api-access-4sz97\") pod \"redhat-marketplace-ddg46\" (UID: \"025ee8c8-8a97-4158-88fb-c4fa23f5c9c2\") " pod="openshift-marketplace/redhat-marketplace-ddg46" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.970474 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:12:02 crc kubenswrapper[4870]: I0130 08:12:02.979121 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-2g2tj" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.008054 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jqng8"] Jan 30 08:12:03 crc kubenswrapper[4870]: W0130 08:12:03.009421 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56cb5ce8_da4f_4c24_9805_18a91b316bcd.slice/crio-5318a5759e8a4ecffb11be37d9689df0b960dc674f99fd5d3cb764e4f3066de3 WatchSource:0}: Error finding container 5318a5759e8a4ecffb11be37d9689df0b960dc674f99fd5d3cb764e4f3066de3: Status 404 returned error can't find the container with id 5318a5759e8a4ecffb11be37d9689df0b960dc674f99fd5d3cb764e4f3066de3 Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.041386 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddg46" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.130282 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.176547 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqng8" event={"ID":"56cb5ce8-da4f-4c24-9805-18a91b316bcd","Type":"ContainerStarted","Data":"5318a5759e8a4ecffb11be37d9689df0b960dc674f99fd5d3cb764e4f3066de3"} Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.231304 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.239091 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.242151 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.244624 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.245156 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.349173 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d0bcbce8-6f90-4ccb-b5b6-163a3dd53675-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d0bcbce8-6f90-4ccb-b5b6-163a3dd53675\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.349291 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0bcbce8-6f90-4ccb-b5b6-163a3dd53675-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d0bcbce8-6f90-4ccb-b5b6-163a3dd53675\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.457454 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d0bcbce8-6f90-4ccb-b5b6-163a3dd53675-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d0bcbce8-6f90-4ccb-b5b6-163a3dd53675\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.457707 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d0bcbce8-6f90-4ccb-b5b6-163a3dd53675-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d0bcbce8-6f90-4ccb-b5b6-163a3dd53675\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.457652 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0bcbce8-6f90-4ccb-b5b6-163a3dd53675-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d0bcbce8-6f90-4ccb-b5b6-163a3dd53675\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.484893 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0bcbce8-6f90-4ccb-b5b6-163a3dd53675-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d0bcbce8-6f90-4ccb-b5b6-163a3dd53675\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.524757 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.560196 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93fd6b37-eee2-4fd5-aa18-51eecea65a3b-secret-volume\") pod \"93fd6b37-eee2-4fd5-aa18-51eecea65a3b\" (UID: \"93fd6b37-eee2-4fd5-aa18-51eecea65a3b\") " Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.560510 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qp2d\" (UniqueName: \"kubernetes.io/projected/93fd6b37-eee2-4fd5-aa18-51eecea65a3b-kube-api-access-9qp2d\") pod \"93fd6b37-eee2-4fd5-aa18-51eecea65a3b\" (UID: \"93fd6b37-eee2-4fd5-aa18-51eecea65a3b\") " Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.560703 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93fd6b37-eee2-4fd5-aa18-51eecea65a3b-config-volume\") pod \"93fd6b37-eee2-4fd5-aa18-51eecea65a3b\" (UID: \"93fd6b37-eee2-4fd5-aa18-51eecea65a3b\") " Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.561846 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93fd6b37-eee2-4fd5-aa18-51eecea65a3b-config-volume" (OuterVolumeSpecName: "config-volume") pod "93fd6b37-eee2-4fd5-aa18-51eecea65a3b" (UID: "93fd6b37-eee2-4fd5-aa18-51eecea65a3b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.567529 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93fd6b37-eee2-4fd5-aa18-51eecea65a3b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "93fd6b37-eee2-4fd5-aa18-51eecea65a3b" (UID: "93fd6b37-eee2-4fd5-aa18-51eecea65a3b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.568108 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93fd6b37-eee2-4fd5-aa18-51eecea65a3b-kube-api-access-9qp2d" (OuterVolumeSpecName: "kube-api-access-9qp2d") pod "93fd6b37-eee2-4fd5-aa18-51eecea65a3b" (UID: "93fd6b37-eee2-4fd5-aa18-51eecea65a3b"). InnerVolumeSpecName "kube-api-access-9qp2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.568638 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.627449 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.627501 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.643167 4870 patch_prober.go:28] interesting pod/console-f9d7485db-2mj87 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.643265 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-2mj87" podUID="2aa49ce7-f902-408a-94f1-da14a661e813" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.668291 4870 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93fd6b37-eee2-4fd5-aa18-51eecea65a3b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.668325 4870 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93fd6b37-eee2-4fd5-aa18-51eecea65a3b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.668334 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qp2d\" (UniqueName: \"kubernetes.io/projected/93fd6b37-eee2-4fd5-aa18-51eecea65a3b-kube-api-access-9qp2d\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.690515 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ddg46"] Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.710856 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-85lwg"] Jan 30 08:12:03 crc kubenswrapper[4870]: E0130 08:12:03.711405 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93fd6b37-eee2-4fd5-aa18-51eecea65a3b" containerName="collect-profiles" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.711432 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="93fd6b37-eee2-4fd5-aa18-51eecea65a3b" containerName="collect-profiles" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.711630 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="93fd6b37-eee2-4fd5-aa18-51eecea65a3b" containerName="collect-profiles" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.712899 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-85lwg" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.715546 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.719789 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-85lwg"] Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.750777 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-dfwzs" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.768378 4870 patch_prober.go:28] interesting pod/router-default-5444994796-dfwzs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:12:03 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Jan 30 08:12:03 crc kubenswrapper[4870]: [+]process-running ok Jan 30 08:12:03 crc kubenswrapper[4870]: healthz check failed Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.768447 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dfwzs" podUID="fe4278ad-53ec-4f7f-9c39-a00b6fa505c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.785304 4870 patch_prober.go:28] interesting pod/downloads-7954f5f757-v7bvt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.785397 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-v7bvt" podUID="78280554-7b5b-4ccf-a674-2664144e4f5a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.786219 4870 patch_prober.go:28] interesting pod/downloads-7954f5f757-v7bvt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.786293 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v7bvt" podUID="78280554-7b5b-4ccf-a674-2664144e4f5a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.852379 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.879303 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drv8z\" (UniqueName: \"kubernetes.io/projected/1d50529a-bc06-49a9-a5bf-64e91e8734c2-kube-api-access-drv8z\") pod \"redhat-operators-85lwg\" (UID: \"1d50529a-bc06-49a9-a5bf-64e91e8734c2\") " pod="openshift-marketplace/redhat-operators-85lwg" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.879458 4870 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d50529a-bc06-49a9-a5bf-64e91e8734c2-utilities\") pod \"redhat-operators-85lwg\" (UID: \"1d50529a-bc06-49a9-a5bf-64e91e8734c2\") " pod="openshift-marketplace/redhat-operators-85lwg" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.879542 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d50529a-bc06-49a9-a5bf-64e91e8734c2-catalog-content\") pod \"redhat-operators-85lwg\" (UID: \"1d50529a-bc06-49a9-a5bf-64e91e8734c2\") " pod="openshift-marketplace/redhat-operators-85lwg" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.980841 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drv8z\" (UniqueName: \"kubernetes.io/projected/1d50529a-bc06-49a9-a5bf-64e91e8734c2-kube-api-access-drv8z\") pod \"redhat-operators-85lwg\" (UID: \"1d50529a-bc06-49a9-a5bf-64e91e8734c2\") " pod="openshift-marketplace/redhat-operators-85lwg" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.980923 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d50529a-bc06-49a9-a5bf-64e91e8734c2-utilities\") pod \"redhat-operators-85lwg\" (UID: \"1d50529a-bc06-49a9-a5bf-64e91e8734c2\") " pod="openshift-marketplace/redhat-operators-85lwg" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.980986 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d50529a-bc06-49a9-a5bf-64e91e8734c2-catalog-content\") pod \"redhat-operators-85lwg\" (UID: \"1d50529a-bc06-49a9-a5bf-64e91e8734c2\") " pod="openshift-marketplace/redhat-operators-85lwg" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.981632 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d50529a-bc06-49a9-a5bf-64e91e8734c2-catalog-content\") pod \"redhat-operators-85lwg\" (UID: \"1d50529a-bc06-49a9-a5bf-64e91e8734c2\") " pod="openshift-marketplace/redhat-operators-85lwg" Jan 30 08:12:03 crc kubenswrapper[4870]: I0130 08:12:03.981718 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d50529a-bc06-49a9-a5bf-64e91e8734c2-utilities\") pod \"redhat-operators-85lwg\" (UID: \"1d50529a-bc06-49a9-a5bf-64e91e8734c2\") " pod="openshift-marketplace/redhat-operators-85lwg" Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.008429 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drv8z\" (UniqueName: \"kubernetes.io/projected/1d50529a-bc06-49a9-a5bf-64e91e8734c2-kube-api-access-drv8z\") pod \"redhat-operators-85lwg\" (UID: \"1d50529a-bc06-49a9-a5bf-64e91e8734c2\") " pod="openshift-marketplace/redhat-operators-85lwg" Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.114011 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 08:12:04 crc kubenswrapper[4870]: W0130 08:12:04.120257 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd0bcbce8_6f90_4ccb_b5b6_163a3dd53675.slice/crio-738b24dab04ad1c07a253fcd86b532a3c395508eed7bde7827792ecb7b2579bd WatchSource:0}: Error finding container 
738b24dab04ad1c07a253fcd86b532a3c395508eed7bde7827792ecb7b2579bd: Status 404 returned error can't find the container with id 738b24dab04ad1c07a253fcd86b532a3c395508eed7bde7827792ecb7b2579bd Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.124859 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-85lwg" Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.128154 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nxbdr"] Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.133235 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nxbdr" Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.144401 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nxbdr"] Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.186590 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5255b75-6d10-40f0-9d11-c975458382cb-catalog-content\") pod \"redhat-operators-nxbdr\" (UID: \"f5255b75-6d10-40f0-9d11-c975458382cb\") " pod="openshift-marketplace/redhat-operators-nxbdr" Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.186669 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5255b75-6d10-40f0-9d11-c975458382cb-utilities\") pod \"redhat-operators-nxbdr\" (UID: \"f5255b75-6d10-40f0-9d11-c975458382cb\") " pod="openshift-marketplace/redhat-operators-nxbdr" Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.186713 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmxfx\" (UniqueName: \"kubernetes.io/projected/f5255b75-6d10-40f0-9d11-c975458382cb-kube-api-access-hmxfx\") pod \"redhat-operators-nxbdr\" (UID: \"f5255b75-6d10-40f0-9d11-c975458382cb\") " pod="openshift-marketplace/redhat-operators-nxbdr" Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.237696 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d0bcbce8-6f90-4ccb-b5b6-163a3dd53675","Type":"ContainerStarted","Data":"738b24dab04ad1c07a253fcd86b532a3c395508eed7bde7827792ecb7b2579bd"} Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.274178 4870 generic.go:334] "Generic (PLEG): container finished" podID="025ee8c8-8a97-4158-88fb-c4fa23f5c9c2" containerID="3038fe3301b881e13deef165b3387577b2b42b2fdbf788c35bdc741aa4ed718c" exitCode=0 Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.274259 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddg46" event={"ID":"025ee8c8-8a97-4158-88fb-c4fa23f5c9c2","Type":"ContainerDied","Data":"3038fe3301b881e13deef165b3387577b2b42b2fdbf788c35bdc741aa4ed718c"} Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.274292 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddg46" event={"ID":"025ee8c8-8a97-4158-88fb-c4fa23f5c9c2","Type":"ContainerStarted","Data":"a0bdc36a8576d5c25a0097622d42f72393c74577381da880313d27ca87e33cc7"} Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.288701 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmxfx\" (UniqueName: 
\"kubernetes.io/projected/f5255b75-6d10-40f0-9d11-c975458382cb-kube-api-access-hmxfx\") pod \"redhat-operators-nxbdr\" (UID: \"f5255b75-6d10-40f0-9d11-c975458382cb\") " pod="openshift-marketplace/redhat-operators-nxbdr" Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.288932 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5255b75-6d10-40f0-9d11-c975458382cb-catalog-content\") pod \"redhat-operators-nxbdr\" (UID: \"f5255b75-6d10-40f0-9d11-c975458382cb\") " pod="openshift-marketplace/redhat-operators-nxbdr" Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.289819 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5255b75-6d10-40f0-9d11-c975458382cb-utilities\") pod \"redhat-operators-nxbdr\" (UID: \"f5255b75-6d10-40f0-9d11-c975458382cb\") " pod="openshift-marketplace/redhat-operators-nxbdr" Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.291400 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5255b75-6d10-40f0-9d11-c975458382cb-catalog-content\") pod \"redhat-operators-nxbdr\" (UID: \"f5255b75-6d10-40f0-9d11-c975458382cb\") " pod="openshift-marketplace/redhat-operators-nxbdr" Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.291972 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5255b75-6d10-40f0-9d11-c975458382cb-utilities\") pod \"redhat-operators-nxbdr\" (UID: \"f5255b75-6d10-40f0-9d11-c975458382cb\") " pod="openshift-marketplace/redhat-operators-nxbdr" Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.300331 4870 generic.go:334] "Generic (PLEG): container finished" podID="56cb5ce8-da4f-4c24-9805-18a91b316bcd" containerID="8a3a4ecde2801a20f3bb4ccdc68bab1d46b831e5569a15eb1e5876330bbb7d42" exitCode=0 Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.300598 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqng8" event={"ID":"56cb5ce8-da4f-4c24-9805-18a91b316bcd","Type":"ContainerDied","Data":"8a3a4ecde2801a20f3bb4ccdc68bab1d46b831e5569a15eb1e5876330bbb7d42"} Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.314786 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92" Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.314831 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92" event={"ID":"93fd6b37-eee2-4fd5-aa18-51eecea65a3b","Type":"ContainerDied","Data":"1d4f6287093b36cb2637cb372b313eb6bd9562c5fa5d2aeb1ae5bac20a51d619"} Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.314910 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d4f6287093b36cb2637cb372b313eb6bd9562c5fa5d2aeb1ae5bac20a51d619" Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.340513 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmxfx\" (UniqueName: \"kubernetes.io/projected/f5255b75-6d10-40f0-9d11-c975458382cb-kube-api-access-hmxfx\") pod \"redhat-operators-nxbdr\" (UID: \"f5255b75-6d10-40f0-9d11-c975458382cb\") " pod="openshift-marketplace/redhat-operators-nxbdr" Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.345096 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"442fd418-e9e8-4cda-8e47-0a2780ae306d","Type":"ContainerStarted","Data":"b8cd20c1ed0f195e42077335953b04003fcd5a5b8e38705335ea3a93348af2c9"} Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.345141 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"442fd418-e9e8-4cda-8e47-0a2780ae306d","Type":"ContainerStarted","Data":"2d9181b27bd439ecb45b4b635aa0db9190165a3b423f5ddca9eedee27fed1520"} Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.393499 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.393478605 podStartE2EDuration="2.393478605s" podCreationTimestamp="2026-01-30 08:12:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:12:04.390970392 +0000 UTC m=+163.086517501" watchObservedRunningTime="2026-01-30 08:12:04.393478605 +0000 UTC m=+163.089025714" Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.525552 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nxbdr" Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.653143 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-85lwg"] Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.756370 4870 patch_prober.go:28] interesting pod/router-default-5444994796-dfwzs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:12:04 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Jan 30 08:12:04 crc kubenswrapper[4870]: [+]process-running ok Jan 30 08:12:04 crc kubenswrapper[4870]: healthz check failed Jan 30 08:12:04 crc kubenswrapper[4870]: I0130 08:12:04.756472 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dfwzs" podUID="fe4278ad-53ec-4f7f-9c39-a00b6fa505c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:12:05 crc kubenswrapper[4870]: I0130 08:12:05.039940 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nxbdr"] Jan 30 08:12:05 crc kubenswrapper[4870]: I0130 08:12:05.358237 4870 generic.go:334] "Generic (PLEG): container finished" podID="1d50529a-bc06-49a9-a5bf-64e91e8734c2" containerID="f4984448372f3c99bd2eb627d2f6a37eee0cab48c315336c3d5192e15f6bb85e" exitCode=0 Jan 30 08:12:05 crc kubenswrapper[4870]: I0130 08:12:05.358313 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85lwg" event={"ID":"1d50529a-bc06-49a9-a5bf-64e91e8734c2","Type":"ContainerDied","Data":"f4984448372f3c99bd2eb627d2f6a37eee0cab48c315336c3d5192e15f6bb85e"} Jan 30 08:12:05 crc kubenswrapper[4870]: I0130 08:12:05.358628 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85lwg" event={"ID":"1d50529a-bc06-49a9-a5bf-64e91e8734c2","Type":"ContainerStarted","Data":"1b14874ab64bd9943b3954bf834f4ae30ab6a234601d5bd7fe08c6631f1c0819"} Jan 30 08:12:05 crc kubenswrapper[4870]: I0130 08:12:05.361262 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxbdr" event={"ID":"f5255b75-6d10-40f0-9d11-c975458382cb","Type":"ContainerStarted","Data":"75dc4d4ca08c96b6af316cef86b49419d2a6ad7374d685b482b8ff2fed0aeb65"} Jan 30 08:12:05 crc kubenswrapper[4870]: I0130 08:12:05.366936 4870 generic.go:334] "Generic (PLEG): container finished" podID="442fd418-e9e8-4cda-8e47-0a2780ae306d" containerID="b8cd20c1ed0f195e42077335953b04003fcd5a5b8e38705335ea3a93348af2c9" exitCode=0 Jan 30 08:12:05 crc kubenswrapper[4870]: I0130 08:12:05.367111 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"442fd418-e9e8-4cda-8e47-0a2780ae306d","Type":"ContainerDied","Data":"b8cd20c1ed0f195e42077335953b04003fcd5a5b8e38705335ea3a93348af2c9"} Jan 30 08:12:05 crc kubenswrapper[4870]: I0130 08:12:05.371464 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d0bcbce8-6f90-4ccb-b5b6-163a3dd53675","Type":"ContainerStarted","Data":"a6b7b751164e39de50a2a14218aad750e231a678e095709e7e9878cf8f73fa45"} Jan 30 08:12:05 crc kubenswrapper[4870]: I0130 08:12:05.426485 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.42645124 podStartE2EDuration="2.42645124s" podCreationTimestamp="2026-01-30 08:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:12:05.424283346 +0000 UTC m=+164.119830465" watchObservedRunningTime="2026-01-30 08:12:05.42645124 +0000 UTC m=+164.121998349" Jan 30 08:12:05 crc kubenswrapper[4870]: I0130 08:12:05.754277 4870 patch_prober.go:28] interesting pod/router-default-5444994796-dfwzs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:12:05 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Jan 30 08:12:05 crc kubenswrapper[4870]: [+]process-running ok Jan 30 08:12:05 crc kubenswrapper[4870]: healthz check failed Jan 30 08:12:05 crc kubenswrapper[4870]: I0130 08:12:05.754354 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dfwzs" podUID="fe4278ad-53ec-4f7f-9c39-a00b6fa505c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:12:06 crc kubenswrapper[4870]: I0130 08:12:06.052131 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:12:06 crc kubenswrapper[4870]: I0130 08:12:06.401775 4870 generic.go:334] "Generic (PLEG): container finished" podID="f5255b75-6d10-40f0-9d11-c975458382cb" containerID="91c2f94ca5ba24911752768fbf8405337095dc7b2ce3321b5d0889c6d9e3f381" exitCode=0 Jan 30 08:12:06 crc kubenswrapper[4870]: I0130 08:12:06.401834 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxbdr" event={"ID":"f5255b75-6d10-40f0-9d11-c975458382cb","Type":"ContainerDied","Data":"91c2f94ca5ba24911752768fbf8405337095dc7b2ce3321b5d0889c6d9e3f381"} Jan 30 08:12:06 crc kubenswrapper[4870]: I0130 08:12:06.450443 4870 generic.go:334] "Generic (PLEG): container finished" podID="d0bcbce8-6f90-4ccb-b5b6-163a3dd53675" containerID="a6b7b751164e39de50a2a14218aad750e231a678e095709e7e9878cf8f73fa45" exitCode=0 Jan 30 08:12:06 crc kubenswrapper[4870]: I0130 08:12:06.450977 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d0bcbce8-6f90-4ccb-b5b6-163a3dd53675","Type":"ContainerDied","Data":"a6b7b751164e39de50a2a14218aad750e231a678e095709e7e9878cf8f73fa45"} Jan 30 08:12:06 crc kubenswrapper[4870]: I0130 08:12:06.755743 4870 patch_prober.go:28] interesting pod/router-default-5444994796-dfwzs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:12:06 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Jan 30 08:12:06 crc kubenswrapper[4870]: [+]process-running ok Jan 30 08:12:06 crc kubenswrapper[4870]: healthz check failed Jan 30 08:12:06 crc kubenswrapper[4870]: I0130 08:12:06.756217 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dfwzs" podUID="fe4278ad-53ec-4f7f-9c39-a00b6fa505c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:12:06 crc kubenswrapper[4870]: I0130 08:12:06.936201 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 08:12:06 crc kubenswrapper[4870]: I0130 08:12:06.962435 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442fd418-e9e8-4cda-8e47-0a2780ae306d-kube-api-access\") pod \"442fd418-e9e8-4cda-8e47-0a2780ae306d\" (UID: \"442fd418-e9e8-4cda-8e47-0a2780ae306d\") " Jan 30 08:12:06 crc kubenswrapper[4870]: I0130 08:12:06.962565 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/442fd418-e9e8-4cda-8e47-0a2780ae306d-kubelet-dir\") pod \"442fd418-e9e8-4cda-8e47-0a2780ae306d\" (UID: \"442fd418-e9e8-4cda-8e47-0a2780ae306d\") " Jan 30 08:12:06 crc kubenswrapper[4870]: I0130 08:12:06.962694 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/442fd418-e9e8-4cda-8e47-0a2780ae306d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "442fd418-e9e8-4cda-8e47-0a2780ae306d" (UID: "442fd418-e9e8-4cda-8e47-0a2780ae306d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:12:06 crc kubenswrapper[4870]: I0130 08:12:06.963010 4870 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/442fd418-e9e8-4cda-8e47-0a2780ae306d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:06 crc kubenswrapper[4870]: I0130 08:12:06.989392 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/442fd418-e9e8-4cda-8e47-0a2780ae306d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "442fd418-e9e8-4cda-8e47-0a2780ae306d" (UID: "442fd418-e9e8-4cda-8e47-0a2780ae306d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:12:07 crc kubenswrapper[4870]: I0130 08:12:07.069712 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442fd418-e9e8-4cda-8e47-0a2780ae306d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:07 crc kubenswrapper[4870]: I0130 08:12:07.485357 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 08:12:07 crc kubenswrapper[4870]: I0130 08:12:07.486339 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"442fd418-e9e8-4cda-8e47-0a2780ae306d","Type":"ContainerDied","Data":"2d9181b27bd439ecb45b4b635aa0db9190165a3b423f5ddca9eedee27fed1520"} Jan 30 08:12:07 crc kubenswrapper[4870]: I0130 08:12:07.486377 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d9181b27bd439ecb45b4b635aa0db9190165a3b423f5ddca9eedee27fed1520" Jan 30 08:12:07 crc kubenswrapper[4870]: I0130 08:12:07.757289 4870 patch_prober.go:28] interesting pod/router-default-5444994796-dfwzs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:12:07 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Jan 30 08:12:07 crc kubenswrapper[4870]: [+]process-running ok Jan 30 08:12:07 crc kubenswrapper[4870]: healthz check failed Jan 30 08:12:07 crc kubenswrapper[4870]: I0130 08:12:07.757356 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dfwzs" podUID="fe4278ad-53ec-4f7f-9c39-a00b6fa505c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:12:07 crc kubenswrapper[4870]: I0130 08:12:07.969215 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 08:12:07 crc kubenswrapper[4870]: I0130 08:12:07.990223 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0bcbce8-6f90-4ccb-b5b6-163a3dd53675-kube-api-access\") pod \"d0bcbce8-6f90-4ccb-b5b6-163a3dd53675\" (UID: \"d0bcbce8-6f90-4ccb-b5b6-163a3dd53675\") " Jan 30 08:12:07 crc kubenswrapper[4870]: I0130 08:12:07.990334 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d0bcbce8-6f90-4ccb-b5b6-163a3dd53675-kubelet-dir\") pod \"d0bcbce8-6f90-4ccb-b5b6-163a3dd53675\" (UID: \"d0bcbce8-6f90-4ccb-b5b6-163a3dd53675\") " Jan 30 08:12:07 crc kubenswrapper[4870]: I0130 08:12:07.990462 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0bcbce8-6f90-4ccb-b5b6-163a3dd53675-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d0bcbce8-6f90-4ccb-b5b6-163a3dd53675" (UID: "d0bcbce8-6f90-4ccb-b5b6-163a3dd53675"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:12:07 crc kubenswrapper[4870]: I0130 08:12:07.990581 4870 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d0bcbce8-6f90-4ccb-b5b6-163a3dd53675-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:08 crc kubenswrapper[4870]: I0130 08:12:07.995901 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0bcbce8-6f90-4ccb-b5b6-163a3dd53675-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d0bcbce8-6f90-4ccb-b5b6-163a3dd53675" (UID: "d0bcbce8-6f90-4ccb-b5b6-163a3dd53675"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:12:08 crc kubenswrapper[4870]: I0130 08:12:08.092840 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0bcbce8-6f90-4ccb-b5b6-163a3dd53675-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:08 crc kubenswrapper[4870]: I0130 08:12:08.523593 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d0bcbce8-6f90-4ccb-b5b6-163a3dd53675","Type":"ContainerDied","Data":"738b24dab04ad1c07a253fcd86b532a3c395508eed7bde7827792ecb7b2579bd"} Jan 30 08:12:08 crc kubenswrapper[4870]: I0130 08:12:08.524165 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="738b24dab04ad1c07a253fcd86b532a3c395508eed7bde7827792ecb7b2579bd" Jan 30 08:12:08 crc kubenswrapper[4870]: I0130 08:12:08.523689 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 08:12:08 crc kubenswrapper[4870]: I0130 08:12:08.754125 4870 patch_prober.go:28] interesting pod/router-default-5444994796-dfwzs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:12:08 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Jan 30 08:12:08 crc kubenswrapper[4870]: [+]process-running ok Jan 30 08:12:08 crc kubenswrapper[4870]: healthz check failed Jan 30 08:12:08 crc kubenswrapper[4870]: I0130 08:12:08.754496 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dfwzs" podUID="fe4278ad-53ec-4f7f-9c39-a00b6fa505c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:12:09 crc kubenswrapper[4870]: I0130 08:12:09.152573 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-mnsdp" Jan 30 08:12:09 crc kubenswrapper[4870]: I0130 08:12:09.630630 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs\") pod \"network-metrics-daemon-mp9vw\" (UID: \"7b976744-b72d-4291-a32f-437fc1cfbf03\") " pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:12:09 crc kubenswrapper[4870]: I0130 08:12:09.657021 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b976744-b72d-4291-a32f-437fc1cfbf03-metrics-certs\") pod \"network-metrics-daemon-mp9vw\" (UID: \"7b976744-b72d-4291-a32f-437fc1cfbf03\") " pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:12:09 crc kubenswrapper[4870]: I0130 08:12:09.754915 4870 patch_prober.go:28] interesting pod/router-default-5444994796-dfwzs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:12:09 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Jan 30 08:12:09 crc kubenswrapper[4870]: [+]process-running ok Jan 30 08:12:09 crc kubenswrapper[4870]: healthz check failed Jan 30 08:12:09 crc kubenswrapper[4870]: I0130 08:12:09.755075 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dfwzs" 
podUID="fe4278ad-53ec-4f7f-9c39-a00b6fa505c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:12:09 crc kubenswrapper[4870]: I0130 08:12:09.895220 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mp9vw" Jan 30 08:12:10 crc kubenswrapper[4870]: I0130 08:12:10.335729 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mp9vw"] Jan 30 08:12:10 crc kubenswrapper[4870]: I0130 08:12:10.580089 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mp9vw" event={"ID":"7b976744-b72d-4291-a32f-437fc1cfbf03","Type":"ContainerStarted","Data":"e11e3efb615e509dfc9d07377a0b4baa70f7d89635ec493f7c7ad084d1c2a8bf"} Jan 30 08:12:10 crc kubenswrapper[4870]: I0130 08:12:10.754607 4870 patch_prober.go:28] interesting pod/router-default-5444994796-dfwzs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:12:10 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Jan 30 08:12:10 crc kubenswrapper[4870]: [+]process-running ok Jan 30 08:12:10 crc kubenswrapper[4870]: healthz check failed Jan 30 08:12:10 crc kubenswrapper[4870]: I0130 08:12:10.754673 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dfwzs" podUID="fe4278ad-53ec-4f7f-9c39-a00b6fa505c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:12:11 crc kubenswrapper[4870]: I0130 08:12:11.608266 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mp9vw" event={"ID":"7b976744-b72d-4291-a32f-437fc1cfbf03","Type":"ContainerStarted","Data":"8e045380b35ca46680cb58a41fe659499348b8347b986bd2735ff522e36d555d"} Jan 30 08:12:11 crc kubenswrapper[4870]: I0130 08:12:11.754548 4870 patch_prober.go:28] interesting pod/router-default-5444994796-dfwzs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:12:11 crc kubenswrapper[4870]: [+]has-synced ok Jan 30 08:12:11 crc kubenswrapper[4870]: [+]process-running ok Jan 30 08:12:11 crc kubenswrapper[4870]: healthz check failed Jan 30 08:12:11 crc kubenswrapper[4870]: I0130 08:12:11.754680 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dfwzs" podUID="fe4278ad-53ec-4f7f-9c39-a00b6fa505c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:12:12 crc kubenswrapper[4870]: I0130 08:12:12.753928 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-dfwzs" Jan 30 08:12:12 crc kubenswrapper[4870]: I0130 08:12:12.757752 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-dfwzs" Jan 30 08:12:13 crc kubenswrapper[4870]: I0130 08:12:13.632621 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:12:13 crc kubenswrapper[4870]: I0130 08:12:13.637221 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-2mj87" Jan 30 
08:12:13 crc kubenswrapper[4870]: I0130 08:12:13.803954 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-v7bvt" Jan 30 08:12:16 crc kubenswrapper[4870]: I0130 08:12:16.105496 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kkf4z"] Jan 30 08:12:16 crc kubenswrapper[4870]: I0130 08:12:16.106082 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" podUID="3e75fb87-2eda-4658-a5dc-fb9424ed9cb7" containerName="controller-manager" containerID="cri-o://a77c631c25ab71f0cef3c69513d9b0866e7e0d3305252072a16f62fc9dac93dd" gracePeriod=30 Jan 30 08:12:16 crc kubenswrapper[4870]: I0130 08:12:16.111596 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957"] Jan 30 08:12:16 crc kubenswrapper[4870]: I0130 08:12:16.111820 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" podUID="c488e93c-573d-4d04-a272-699af1059a0e" containerName="route-controller-manager" containerID="cri-o://1c707abb6d42fe8dbbf92f521b8a55ebafd7443ac15de36d0828dd259789e664" gracePeriod=30 Jan 30 08:12:17 crc kubenswrapper[4870]: I0130 08:12:17.664971 4870 generic.go:334] "Generic (PLEG): container finished" podID="3e75fb87-2eda-4658-a5dc-fb9424ed9cb7" containerID="a77c631c25ab71f0cef3c69513d9b0866e7e0d3305252072a16f62fc9dac93dd" exitCode=0 Jan 30 08:12:17 crc kubenswrapper[4870]: I0130 08:12:17.665033 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" event={"ID":"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7","Type":"ContainerDied","Data":"a77c631c25ab71f0cef3c69513d9b0866e7e0d3305252072a16f62fc9dac93dd"} Jan 30 08:12:17 crc kubenswrapper[4870]: I0130 08:12:17.667302 4870 generic.go:334] "Generic (PLEG): container finished" podID="c488e93c-573d-4d04-a272-699af1059a0e" containerID="1c707abb6d42fe8dbbf92f521b8a55ebafd7443ac15de36d0828dd259789e664" exitCode=0 Jan 30 08:12:17 crc kubenswrapper[4870]: I0130 08:12:17.667356 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" event={"ID":"c488e93c-573d-4d04-a272-699af1059a0e","Type":"ContainerDied","Data":"1c707abb6d42fe8dbbf92f521b8a55ebafd7443ac15de36d0828dd259789e664"} Jan 30 08:12:19 crc kubenswrapper[4870]: I0130 08:12:19.120007 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:12:21 crc kubenswrapper[4870]: I0130 08:12:21.441033 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.384687 4870 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-8p957 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.386402 4870 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" podUID="c488e93c-573d-4d04-a272-699af1059a0e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.661577 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.664678 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.679089 4870 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-kkf4z container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.679191 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" podUID="3e75fb87-2eda-4658-a5dc-fb9424ed9cb7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.708056 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8596c5f99c-f2skc"] Jan 30 08:12:24 crc kubenswrapper[4870]: E0130 08:12:24.708283 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0bcbce8-6f90-4ccb-b5b6-163a3dd53675" containerName="pruner" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.708303 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0bcbce8-6f90-4ccb-b5b6-163a3dd53675" containerName="pruner" Jan 30 08:12:24 crc kubenswrapper[4870]: E0130 08:12:24.708315 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c488e93c-573d-4d04-a272-699af1059a0e" containerName="route-controller-manager" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.708322 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="c488e93c-573d-4d04-a272-699af1059a0e" containerName="route-controller-manager" Jan 30 08:12:24 crc kubenswrapper[4870]: E0130 08:12:24.708339 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442fd418-e9e8-4cda-8e47-0a2780ae306d" containerName="pruner" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.708347 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="442fd418-e9e8-4cda-8e47-0a2780ae306d" containerName="pruner" Jan 30 08:12:24 crc kubenswrapper[4870]: E0130 08:12:24.708361 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e75fb87-2eda-4658-a5dc-fb9424ed9cb7" containerName="controller-manager" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.708368 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e75fb87-2eda-4658-a5dc-fb9424ed9cb7" containerName="controller-manager" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.708483 4870 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="442fd418-e9e8-4cda-8e47-0a2780ae306d" containerName="pruner" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.708499 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0bcbce8-6f90-4ccb-b5b6-163a3dd53675" containerName="pruner" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.708508 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="c488e93c-573d-4d04-a272-699af1059a0e" containerName="route-controller-manager" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.708518 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e75fb87-2eda-4658-a5dc-fb9424ed9cb7" containerName="controller-manager" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.708856 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.712291 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" event={"ID":"c488e93c-573d-4d04-a272-699af1059a0e","Type":"ContainerDied","Data":"704010f76326b14b45acb49d52a3c39fd09423589bc0b99052ca69b69f06912c"} Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.712327 4870 scope.go:117] "RemoveContainer" containerID="1c707abb6d42fe8dbbf92f521b8a55ebafd7443ac15de36d0828dd259789e664" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.712412 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.721156 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" event={"ID":"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7","Type":"ContainerDied","Data":"c3d25320838bc55388c93ea63e175ab91cf4b33328f8715faa7380d5ef4ae27f"} Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.721305 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-kkf4z" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.736950 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8596c5f99c-f2skc"] Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.782802 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-client-ca\") pod \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\" (UID: \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\") " Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.782931 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frbv4\" (UniqueName: \"kubernetes.io/projected/c488e93c-573d-4d04-a272-699af1059a0e-kube-api-access-frbv4\") pod \"c488e93c-573d-4d04-a272-699af1059a0e\" (UID: \"c488e93c-573d-4d04-a272-699af1059a0e\") " Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.783003 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx49z\" (UniqueName: \"kubernetes.io/projected/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-kube-api-access-nx49z\") pod \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\" (UID: \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\") " Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.783059 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-serving-cert\") pod \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\" (UID: \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\") " Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.783120 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c488e93c-573d-4d04-a272-699af1059a0e-client-ca\") pod \"c488e93c-573d-4d04-a272-699af1059a0e\" (UID: \"c488e93c-573d-4d04-a272-699af1059a0e\") " Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.783230 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c488e93c-573d-4d04-a272-699af1059a0e-config\") pod \"c488e93c-573d-4d04-a272-699af1059a0e\" (UID: \"c488e93c-573d-4d04-a272-699af1059a0e\") " Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.783268 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-config\") pod \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\" (UID: \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\") " Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.783318 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-proxy-ca-bundles\") pod \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\" (UID: \"3e75fb87-2eda-4658-a5dc-fb9424ed9cb7\") " Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.783366 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c488e93c-573d-4d04-a272-699af1059a0e-serving-cert\") pod \"c488e93c-573d-4d04-a272-699af1059a0e\" (UID: \"c488e93c-573d-4d04-a272-699af1059a0e\") " Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.783583 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cnks\" (UniqueName: \"kubernetes.io/projected/a17e1099-eed8-4519-af45-260df6408a0b-kube-api-access-2cnks\") pod \"controller-manager-8596c5f99c-f2skc\" (UID: \"a17e1099-eed8-4519-af45-260df6408a0b\") " pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.783612 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c488e93c-573d-4d04-a272-699af1059a0e-client-ca" (OuterVolumeSpecName: "client-ca") pod "c488e93c-573d-4d04-a272-699af1059a0e" (UID: "c488e93c-573d-4d04-a272-699af1059a0e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.783722 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a17e1099-eed8-4519-af45-260df6408a0b-serving-cert\") pod \"controller-manager-8596c5f99c-f2skc\" (UID: \"a17e1099-eed8-4519-af45-260df6408a0b\") " pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.783777 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a17e1099-eed8-4519-af45-260df6408a0b-config\") pod \"controller-manager-8596c5f99c-f2skc\" (UID: \"a17e1099-eed8-4519-af45-260df6408a0b\") " pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.783945 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a17e1099-eed8-4519-af45-260df6408a0b-proxy-ca-bundles\") pod \"controller-manager-8596c5f99c-f2skc\" (UID: \"a17e1099-eed8-4519-af45-260df6408a0b\") " pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.784100 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a17e1099-eed8-4519-af45-260df6408a0b-client-ca\") pod \"controller-manager-8596c5f99c-f2skc\" (UID: \"a17e1099-eed8-4519-af45-260df6408a0b\") " pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.784377 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3e75fb87-2eda-4658-a5dc-fb9424ed9cb7" (UID: "3e75fb87-2eda-4658-a5dc-fb9424ed9cb7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.784101 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c488e93c-573d-4d04-a272-699af1059a0e-config" (OuterVolumeSpecName: "config") pod "c488e93c-573d-4d04-a272-699af1059a0e" (UID: "c488e93c-573d-4d04-a272-699af1059a0e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.784594 4870 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c488e93c-573d-4d04-a272-699af1059a0e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.784474 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-config" (OuterVolumeSpecName: "config") pod "3e75fb87-2eda-4658-a5dc-fb9424ed9cb7" (UID: "3e75fb87-2eda-4658-a5dc-fb9424ed9cb7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.784675 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-client-ca" (OuterVolumeSpecName: "client-ca") pod "3e75fb87-2eda-4658-a5dc-fb9424ed9cb7" (UID: "3e75fb87-2eda-4658-a5dc-fb9424ed9cb7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.789598 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c488e93c-573d-4d04-a272-699af1059a0e-kube-api-access-frbv4" (OuterVolumeSpecName: "kube-api-access-frbv4") pod "c488e93c-573d-4d04-a272-699af1059a0e" (UID: "c488e93c-573d-4d04-a272-699af1059a0e"). InnerVolumeSpecName "kube-api-access-frbv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.789602 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3e75fb87-2eda-4658-a5dc-fb9424ed9cb7" (UID: "3e75fb87-2eda-4658-a5dc-fb9424ed9cb7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.789663 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c488e93c-573d-4d04-a272-699af1059a0e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c488e93c-573d-4d04-a272-699af1059a0e" (UID: "c488e93c-573d-4d04-a272-699af1059a0e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.791253 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-kube-api-access-nx49z" (OuterVolumeSpecName: "kube-api-access-nx49z") pod "3e75fb87-2eda-4658-a5dc-fb9424ed9cb7" (UID: "3e75fb87-2eda-4658-a5dc-fb9424ed9cb7"). InnerVolumeSpecName "kube-api-access-nx49z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.886269 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cnks\" (UniqueName: \"kubernetes.io/projected/a17e1099-eed8-4519-af45-260df6408a0b-kube-api-access-2cnks\") pod \"controller-manager-8596c5f99c-f2skc\" (UID: \"a17e1099-eed8-4519-af45-260df6408a0b\") " pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.886358 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a17e1099-eed8-4519-af45-260df6408a0b-serving-cert\") pod \"controller-manager-8596c5f99c-f2skc\" (UID: \"a17e1099-eed8-4519-af45-260df6408a0b\") " pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.886394 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a17e1099-eed8-4519-af45-260df6408a0b-config\") pod \"controller-manager-8596c5f99c-f2skc\" (UID: \"a17e1099-eed8-4519-af45-260df6408a0b\") " pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.886434 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a17e1099-eed8-4519-af45-260df6408a0b-proxy-ca-bundles\") pod \"controller-manager-8596c5f99c-f2skc\" (UID: \"a17e1099-eed8-4519-af45-260df6408a0b\") " pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.886477 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a17e1099-eed8-4519-af45-260df6408a0b-client-ca\") pod \"controller-manager-8596c5f99c-f2skc\" (UID: \"a17e1099-eed8-4519-af45-260df6408a0b\") " pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.886545 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c488e93c-573d-4d04-a272-699af1059a0e-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.886564 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.886578 4870 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.886595 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c488e93c-573d-4d04-a272-699af1059a0e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.886611 4870 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.886623 4870 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-frbv4\" (UniqueName: \"kubernetes.io/projected/c488e93c-573d-4d04-a272-699af1059a0e-kube-api-access-frbv4\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.886635 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx49z\" (UniqueName: \"kubernetes.io/projected/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-kube-api-access-nx49z\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.886647 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.888054 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a17e1099-eed8-4519-af45-260df6408a0b-client-ca\") pod \"controller-manager-8596c5f99c-f2skc\" (UID: \"a17e1099-eed8-4519-af45-260df6408a0b\") " pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.888135 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a17e1099-eed8-4519-af45-260df6408a0b-proxy-ca-bundles\") pod \"controller-manager-8596c5f99c-f2skc\" (UID: \"a17e1099-eed8-4519-af45-260df6408a0b\") " pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.889409 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a17e1099-eed8-4519-af45-260df6408a0b-config\") pod \"controller-manager-8596c5f99c-f2skc\" (UID: \"a17e1099-eed8-4519-af45-260df6408a0b\") " pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.895113 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a17e1099-eed8-4519-af45-260df6408a0b-serving-cert\") pod \"controller-manager-8596c5f99c-f2skc\" (UID: \"a17e1099-eed8-4519-af45-260df6408a0b\") " pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" Jan 30 08:12:24 crc kubenswrapper[4870]: I0130 08:12:24.906078 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cnks\" (UniqueName: \"kubernetes.io/projected/a17e1099-eed8-4519-af45-260df6408a0b-kube-api-access-2cnks\") pod \"controller-manager-8596c5f99c-f2skc\" (UID: \"a17e1099-eed8-4519-af45-260df6408a0b\") " pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" Jan 30 08:12:25 crc kubenswrapper[4870]: I0130 08:12:25.040323 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" Jan 30 08:12:25 crc kubenswrapper[4870]: I0130 08:12:25.070045 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957"] Jan 30 08:12:25 crc kubenswrapper[4870]: I0130 08:12:25.080180 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8p957"] Jan 30 08:12:25 crc kubenswrapper[4870]: I0130 08:12:25.086436 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kkf4z"] Jan 30 08:12:25 crc kubenswrapper[4870]: I0130 08:12:25.091225 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kkf4z"] Jan 30 08:12:25 crc kubenswrapper[4870]: I0130 08:12:25.250225 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:12:25 crc kubenswrapper[4870]: I0130 08:12:25.250736 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:12:26 crc kubenswrapper[4870]: I0130 08:12:26.081633 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e75fb87-2eda-4658-a5dc-fb9424ed9cb7" path="/var/lib/kubelet/pods/3e75fb87-2eda-4658-a5dc-fb9424ed9cb7/volumes" Jan 30 08:12:26 crc kubenswrapper[4870]: I0130 08:12:26.082627 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c488e93c-573d-4d04-a272-699af1059a0e" path="/var/lib/kubelet/pods/c488e93c-573d-4d04-a272-699af1059a0e/volumes" Jan 30 08:12:27 crc kubenswrapper[4870]: I0130 08:12:27.173849 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn"] Jan 30 08:12:27 crc kubenswrapper[4870]: I0130 08:12:27.175024 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" Jan 30 08:12:27 crc kubenswrapper[4870]: I0130 08:12:27.178850 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 08:12:27 crc kubenswrapper[4870]: I0130 08:12:27.179530 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 08:12:27 crc kubenswrapper[4870]: I0130 08:12:27.180329 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 08:12:27 crc kubenswrapper[4870]: I0130 08:12:27.180336 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn"] Jan 30 08:12:27 crc kubenswrapper[4870]: I0130 08:12:27.180485 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 08:12:27 crc kubenswrapper[4870]: I0130 08:12:27.182144 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 08:12:27 crc kubenswrapper[4870]: I0130 08:12:27.186662 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 08:12:27 crc kubenswrapper[4870]: I0130 08:12:27.224379 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5914070f-d811-4c53-962e-62e819772201-config\") pod \"route-controller-manager-6746697959-z2jtn\" (UID: \"5914070f-d811-4c53-962e-62e819772201\") " pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" Jan 30 08:12:27 crc kubenswrapper[4870]: I0130 08:12:27.224503 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-276q6\" (UniqueName: \"kubernetes.io/projected/5914070f-d811-4c53-962e-62e819772201-kube-api-access-276q6\") pod \"route-controller-manager-6746697959-z2jtn\" (UID: \"5914070f-d811-4c53-962e-62e819772201\") " pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" Jan 30 08:12:27 crc kubenswrapper[4870]: I0130 08:12:27.224560 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5914070f-d811-4c53-962e-62e819772201-client-ca\") pod \"route-controller-manager-6746697959-z2jtn\" (UID: \"5914070f-d811-4c53-962e-62e819772201\") " pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" Jan 30 08:12:27 crc kubenswrapper[4870]: I0130 08:12:27.224689 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5914070f-d811-4c53-962e-62e819772201-serving-cert\") pod \"route-controller-manager-6746697959-z2jtn\" (UID: \"5914070f-d811-4c53-962e-62e819772201\") " pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" Jan 30 08:12:27 crc kubenswrapper[4870]: I0130 08:12:27.326381 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5914070f-d811-4c53-962e-62e819772201-config\") pod 
\"route-controller-manager-6746697959-z2jtn\" (UID: \"5914070f-d811-4c53-962e-62e819772201\") " pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" Jan 30 08:12:27 crc kubenswrapper[4870]: I0130 08:12:27.326445 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-276q6\" (UniqueName: \"kubernetes.io/projected/5914070f-d811-4c53-962e-62e819772201-kube-api-access-276q6\") pod \"route-controller-manager-6746697959-z2jtn\" (UID: \"5914070f-d811-4c53-962e-62e819772201\") " pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" Jan 30 08:12:27 crc kubenswrapper[4870]: I0130 08:12:27.326491 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5914070f-d811-4c53-962e-62e819772201-client-ca\") pod \"route-controller-manager-6746697959-z2jtn\" (UID: \"5914070f-d811-4c53-962e-62e819772201\") " pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" Jan 30 08:12:27 crc kubenswrapper[4870]: I0130 08:12:27.326600 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5914070f-d811-4c53-962e-62e819772201-serving-cert\") pod \"route-controller-manager-6746697959-z2jtn\" (UID: \"5914070f-d811-4c53-962e-62e819772201\") " pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" Jan 30 08:12:27 crc kubenswrapper[4870]: I0130 08:12:27.327951 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5914070f-d811-4c53-962e-62e819772201-client-ca\") pod \"route-controller-manager-6746697959-z2jtn\" (UID: \"5914070f-d811-4c53-962e-62e819772201\") " pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" Jan 30 08:12:27 crc kubenswrapper[4870]: I0130 08:12:27.330032 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5914070f-d811-4c53-962e-62e819772201-config\") pod \"route-controller-manager-6746697959-z2jtn\" (UID: \"5914070f-d811-4c53-962e-62e819772201\") " pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" Jan 30 08:12:27 crc kubenswrapper[4870]: I0130 08:12:27.332476 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5914070f-d811-4c53-962e-62e819772201-serving-cert\") pod \"route-controller-manager-6746697959-z2jtn\" (UID: \"5914070f-d811-4c53-962e-62e819772201\") " pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" Jan 30 08:12:27 crc kubenswrapper[4870]: I0130 08:12:27.348389 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-276q6\" (UniqueName: \"kubernetes.io/projected/5914070f-d811-4c53-962e-62e819772201-kube-api-access-276q6\") pod \"route-controller-manager-6746697959-z2jtn\" (UID: \"5914070f-d811-4c53-962e-62e819772201\") " pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" Jan 30 08:12:27 crc kubenswrapper[4870]: I0130 08:12:27.512216 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" Jan 30 08:12:33 crc kubenswrapper[4870]: E0130 08:12:33.908571 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 08:12:33 crc kubenswrapper[4870]: E0130 08:12:33.909652 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lhq7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vm685_openshift-marketplace(abc41080-75c5-421f-baa8-f05792f74564): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 08:12:33 crc kubenswrapper[4870]: E0130 08:12:33.910827 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-vm685" podUID="abc41080-75c5-421f-baa8-f05792f74564" Jan 30 08:12:33 crc kubenswrapper[4870]: E0130 08:12:33.951501 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 30 08:12:33 crc kubenswrapper[4870]: E0130 08:12:33.951665 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sz97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ddg46_openshift-marketplace(025ee8c8-8a97-4158-88fb-c4fa23f5c9c2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 08:12:33 crc kubenswrapper[4870]: E0130 08:12:33.952857 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-ddg46" podUID="025ee8c8-8a97-4158-88fb-c4fa23f5c9c2" Jan 30 08:12:34 crc kubenswrapper[4870]: I0130 08:12:34.087571 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-26vrg" Jan 30 08:12:35 crc kubenswrapper[4870]: E0130 08:12:35.592812 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ddg46" podUID="025ee8c8-8a97-4158-88fb-c4fa23f5c9c2" Jan 30 08:12:35 crc kubenswrapper[4870]: E0130 08:12:35.593153 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vm685" podUID="abc41080-75c5-421f-baa8-f05792f74564" Jan 30 08:12:35 crc kubenswrapper[4870]: I0130 08:12:35.647636 4870 scope.go:117] "RemoveContainer" containerID="a77c631c25ab71f0cef3c69513d9b0866e7e0d3305252072a16f62fc9dac93dd" Jan 30 08:12:35 crc kubenswrapper[4870]: E0130 08:12:35.727096 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 08:12:35 crc kubenswrapper[4870]: E0130 08:12:35.727858 4870 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 30 08:12:35 crc kubenswrapper[4870]: E0130 08:12:35.729163 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44g22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-cx2x5_openshift-marketplace(258d3e35-5580-4108-889c-9d5d2f80c810): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 08:12:35 crc kubenswrapper[4870]: E0130 08:12:35.729301 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rg24l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rk4lj_openshift-marketplace(ba2950a4-e1b9-45a9-9980-1b4169e0fb16): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 08:12:35 crc kubenswrapper[4870]: E0130 08:12:35.731583 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-cx2x5" podUID="258d3e35-5580-4108-889c-9d5d2f80c810" Jan 30 08:12:35 crc kubenswrapper[4870]: E0130 08:12:35.731694 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rk4lj" podUID="ba2950a4-e1b9-45a9-9980-1b4169e0fb16" Jan 30 08:12:35 crc kubenswrapper[4870]: E0130 08:12:35.757670 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 30 08:12:35 crc kubenswrapper[4870]: E0130 08:12:35.758276 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6wlfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-sdlrf_openshift-marketplace(e02d35f8-2e8c-47a3-87c9-9580ab766290): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 08:12:35 crc kubenswrapper[4870]: E0130 08:12:35.759620 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-sdlrf" podUID="e02d35f8-2e8c-47a3-87c9-9580ab766290" Jan 30 08:12:35 crc kubenswrapper[4870]: E0130 08:12:35.814338 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-cx2x5" podUID="258d3e35-5580-4108-889c-9d5d2f80c810" Jan 30 08:12:35 crc kubenswrapper[4870]: E0130 08:12:35.814339 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rk4lj" podUID="ba2950a4-e1b9-45a9-9980-1b4169e0fb16" Jan 30 08:12:35 crc kubenswrapper[4870]: E0130 08:12:35.814390 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-sdlrf" podUID="e02d35f8-2e8c-47a3-87c9-9580ab766290" Jan 30 08:12:35 crc kubenswrapper[4870]: I0130 08:12:35.923042 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8596c5f99c-f2skc"] Jan 30 08:12:35 crc kubenswrapper[4870]: W0130 08:12:35.939034 4870 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda17e1099_eed8_4519_af45_260df6408a0b.slice/crio-c3a5d53e77564e84f7b91da983d0401bd24c456bb36029d2060aa1e9d103cd82 WatchSource:0}: Error finding container c3a5d53e77564e84f7b91da983d0401bd24c456bb36029d2060aa1e9d103cd82: Status 404 returned error can't find the container with id c3a5d53e77564e84f7b91da983d0401bd24c456bb36029d2060aa1e9d103cd82 Jan 30 08:12:36 crc kubenswrapper[4870]: I0130 08:12:36.048572 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn"] Jan 30 08:12:36 crc kubenswrapper[4870]: I0130 08:12:36.106507 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8596c5f99c-f2skc"] Jan 30 08:12:36 crc kubenswrapper[4870]: I0130 08:12:36.207750 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn"] Jan 30 08:12:36 crc kubenswrapper[4870]: I0130 08:12:36.804688 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" event={"ID":"a17e1099-eed8-4519-af45-260df6408a0b","Type":"ContainerStarted","Data":"c68b026edcf68806f3dc17b372ccdee94fa2aa4f5458d1613f492f6076a996ab"} Jan 30 08:12:36 crc kubenswrapper[4870]: I0130 08:12:36.804973 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" event={"ID":"a17e1099-eed8-4519-af45-260df6408a0b","Type":"ContainerStarted","Data":"c3a5d53e77564e84f7b91da983d0401bd24c456bb36029d2060aa1e9d103cd82"} Jan 30 08:12:36 crc kubenswrapper[4870]: I0130 08:12:36.805074 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" podUID="a17e1099-eed8-4519-af45-260df6408a0b" containerName="controller-manager" containerID="cri-o://c68b026edcf68806f3dc17b372ccdee94fa2aa4f5458d1613f492f6076a996ab" gracePeriod=30 Jan 30 08:12:36 crc kubenswrapper[4870]: I0130 08:12:36.805711 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" Jan 30 08:12:36 crc kubenswrapper[4870]: I0130 08:12:36.808861 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mp9vw" event={"ID":"7b976744-b72d-4291-a32f-437fc1cfbf03","Type":"ContainerStarted","Data":"e62f5616c575397527b0b07778b565294e2dac939498e9fd49ba103ed954c034"} Jan 30 08:12:36 crc kubenswrapper[4870]: I0130 08:12:36.810569 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" Jan 30 08:12:36 crc kubenswrapper[4870]: I0130 08:12:36.816043 4870 generic.go:334] "Generic (PLEG): container finished" podID="56cb5ce8-da4f-4c24-9805-18a91b316bcd" containerID="6734abf7e123160f7f9ec15e63bcacb2803b7e9b5f597cb9ce9439f6abad0e28" exitCode=0 Jan 30 08:12:36 crc kubenswrapper[4870]: I0130 08:12:36.816230 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqng8" event={"ID":"56cb5ce8-da4f-4c24-9805-18a91b316bcd","Type":"ContainerDied","Data":"6734abf7e123160f7f9ec15e63bcacb2803b7e9b5f597cb9ce9439f6abad0e28"} Jan 30 08:12:36 crc kubenswrapper[4870]: I0130 08:12:36.821712 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-85lwg" event={"ID":"1d50529a-bc06-49a9-a5bf-64e91e8734c2","Type":"ContainerStarted","Data":"d0dc443c5c9b20693d4448270af7993e64d959e03cda7b880c0de95b2ee5007b"} Jan 30 08:12:36 crc kubenswrapper[4870]: I0130 08:12:36.824392 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxbdr" event={"ID":"f5255b75-6d10-40f0-9d11-c975458382cb","Type":"ContainerStarted","Data":"72d65fd678209cea5f65f67e6fba57d68354274d2c312d3a2b34858c8f8e5d36"} Jan 30 08:12:36 crc kubenswrapper[4870]: I0130 08:12:36.826010 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" event={"ID":"5914070f-d811-4c53-962e-62e819772201","Type":"ContainerStarted","Data":"922a1b15ac187a2b816c2cea61c9a2c801e0c9ab3229f0eacf89f9c59354543f"} Jan 30 08:12:36 crc kubenswrapper[4870]: I0130 08:12:36.826037 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" event={"ID":"5914070f-d811-4c53-962e-62e819772201","Type":"ContainerStarted","Data":"27e7288fbc9550c8c94128f62c3018e54a3ae3ca4153343d75341c7c9ed7ce95"} Jan 30 08:12:36 crc kubenswrapper[4870]: I0130 08:12:36.826093 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" Jan 30 08:12:36 crc kubenswrapper[4870]: I0130 08:12:36.826058 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" podUID="5914070f-d811-4c53-962e-62e819772201" containerName="route-controller-manager" containerID="cri-o://922a1b15ac187a2b816c2cea61c9a2c801e0c9ab3229f0eacf89f9c59354543f" gracePeriod=30 Jan 30 08:12:36 crc kubenswrapper[4870]: I0130 08:12:36.833145 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" podStartSLOduration=20.833124277 podStartE2EDuration="20.833124277s" podCreationTimestamp="2026-01-30 08:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:12:36.828903805 +0000 UTC m=+195.524450914" watchObservedRunningTime="2026-01-30 08:12:36.833124277 +0000 UTC m=+195.528671386" Jan 30 08:12:36 crc kubenswrapper[4870]: I0130 08:12:36.837757 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" Jan 30 08:12:36 crc kubenswrapper[4870]: I0130 08:12:36.866958 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-mp9vw" podStartSLOduration=169.866928239 podStartE2EDuration="2m49.866928239s" podCreationTimestamp="2026-01-30 08:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:12:36.866903528 +0000 UTC m=+195.562450637" watchObservedRunningTime="2026-01-30 08:12:36.866928239 +0000 UTC m=+195.562475348" Jan 30 08:12:36 crc kubenswrapper[4870]: I0130 08:12:36.933508 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" podStartSLOduration=20.933483852 podStartE2EDuration="20.933483852s" 
podCreationTimestamp="2026-01-30 08:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:12:36.929222439 +0000 UTC m=+195.624769558" watchObservedRunningTime="2026-01-30 08:12:36.933483852 +0000 UTC m=+195.629030961" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.222758 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.233428 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.259600 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c6648756f-h84rs"] Jan 30 08:12:37 crc kubenswrapper[4870]: E0130 08:12:37.259918 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a17e1099-eed8-4519-af45-260df6408a0b" containerName="controller-manager" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.259943 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="a17e1099-eed8-4519-af45-260df6408a0b" containerName="controller-manager" Jan 30 08:12:37 crc kubenswrapper[4870]: E0130 08:12:37.259972 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5914070f-d811-4c53-962e-62e819772201" containerName="route-controller-manager" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.259982 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="5914070f-d811-4c53-962e-62e819772201" containerName="route-controller-manager" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.260116 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="5914070f-d811-4c53-962e-62e819772201" containerName="route-controller-manager" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.260135 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="a17e1099-eed8-4519-af45-260df6408a0b" containerName="controller-manager" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.260597 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.282725 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c6648756f-h84rs"] Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.289564 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a17e1099-eed8-4519-af45-260df6408a0b-client-ca\") pod \"a17e1099-eed8-4519-af45-260df6408a0b\" (UID: \"a17e1099-eed8-4519-af45-260df6408a0b\") " Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.289646 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a17e1099-eed8-4519-af45-260df6408a0b-proxy-ca-bundles\") pod \"a17e1099-eed8-4519-af45-260df6408a0b\" (UID: \"a17e1099-eed8-4519-af45-260df6408a0b\") " Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.289674 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5914070f-d811-4c53-962e-62e819772201-config\") pod \"5914070f-d811-4c53-962e-62e819772201\" (UID: \"5914070f-d811-4c53-962e-62e819772201\") " Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.289703 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-276q6\" (UniqueName: \"kubernetes.io/projected/5914070f-d811-4c53-962e-62e819772201-kube-api-access-276q6\") pod \"5914070f-d811-4c53-962e-62e819772201\" (UID: \"5914070f-d811-4c53-962e-62e819772201\") " Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.289732 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a17e1099-eed8-4519-af45-260df6408a0b-config\") pod \"a17e1099-eed8-4519-af45-260df6408a0b\" (UID: \"a17e1099-eed8-4519-af45-260df6408a0b\") " Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.289794 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cnks\" (UniqueName: \"kubernetes.io/projected/a17e1099-eed8-4519-af45-260df6408a0b-kube-api-access-2cnks\") pod \"a17e1099-eed8-4519-af45-260df6408a0b\" (UID: \"a17e1099-eed8-4519-af45-260df6408a0b\") " Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.289813 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a17e1099-eed8-4519-af45-260df6408a0b-serving-cert\") pod \"a17e1099-eed8-4519-af45-260df6408a0b\" (UID: \"a17e1099-eed8-4519-af45-260df6408a0b\") " Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.289837 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5914070f-d811-4c53-962e-62e819772201-client-ca\") pod \"5914070f-d811-4c53-962e-62e819772201\" (UID: \"5914070f-d811-4c53-962e-62e819772201\") " Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.289866 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5914070f-d811-4c53-962e-62e819772201-serving-cert\") pod \"5914070f-d811-4c53-962e-62e819772201\" (UID: \"5914070f-d811-4c53-962e-62e819772201\") " Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.290581 4870 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a17e1099-eed8-4519-af45-260df6408a0b-client-ca" (OuterVolumeSpecName: "client-ca") pod "a17e1099-eed8-4519-af45-260df6408a0b" (UID: "a17e1099-eed8-4519-af45-260df6408a0b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.291367 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a17e1099-eed8-4519-af45-260df6408a0b-config" (OuterVolumeSpecName: "config") pod "a17e1099-eed8-4519-af45-260df6408a0b" (UID: "a17e1099-eed8-4519-af45-260df6408a0b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.291589 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5914070f-d811-4c53-962e-62e819772201-client-ca" (OuterVolumeSpecName: "client-ca") pod "5914070f-d811-4c53-962e-62e819772201" (UID: "5914070f-d811-4c53-962e-62e819772201"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.291763 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5914070f-d811-4c53-962e-62e819772201-config" (OuterVolumeSpecName: "config") pod "5914070f-d811-4c53-962e-62e819772201" (UID: "5914070f-d811-4c53-962e-62e819772201"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.292176 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a17e1099-eed8-4519-af45-260df6408a0b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a17e1099-eed8-4519-af45-260df6408a0b" (UID: "a17e1099-eed8-4519-af45-260df6408a0b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.303049 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a17e1099-eed8-4519-af45-260df6408a0b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a17e1099-eed8-4519-af45-260df6408a0b" (UID: "a17e1099-eed8-4519-af45-260df6408a0b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.303218 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a17e1099-eed8-4519-af45-260df6408a0b-kube-api-access-2cnks" (OuterVolumeSpecName: "kube-api-access-2cnks") pod "a17e1099-eed8-4519-af45-260df6408a0b" (UID: "a17e1099-eed8-4519-af45-260df6408a0b"). InnerVolumeSpecName "kube-api-access-2cnks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.303285 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5914070f-d811-4c53-962e-62e819772201-kube-api-access-276q6" (OuterVolumeSpecName: "kube-api-access-276q6") pod "5914070f-d811-4c53-962e-62e819772201" (UID: "5914070f-d811-4c53-962e-62e819772201"). InnerVolumeSpecName "kube-api-access-276q6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.304054 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5914070f-d811-4c53-962e-62e819772201-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5914070f-d811-4c53-962e-62e819772201" (UID: "5914070f-d811-4c53-962e-62e819772201"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.391833 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-config\") pod \"controller-manager-7c6648756f-h84rs\" (UID: \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\") " pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.392004 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-serving-cert\") pod \"controller-manager-7c6648756f-h84rs\" (UID: \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\") " pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.392045 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-proxy-ca-bundles\") pod \"controller-manager-7c6648756f-h84rs\" (UID: \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\") " pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.392106 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncp7p\" (UniqueName: \"kubernetes.io/projected/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-kube-api-access-ncp7p\") pod \"controller-manager-7c6648756f-h84rs\" (UID: \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\") " pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.392293 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-client-ca\") pod \"controller-manager-7c6648756f-h84rs\" (UID: \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\") " pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.392536 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cnks\" (UniqueName: \"kubernetes.io/projected/a17e1099-eed8-4519-af45-260df6408a0b-kube-api-access-2cnks\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.392557 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a17e1099-eed8-4519-af45-260df6408a0b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.392574 4870 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5914070f-d811-4c53-962e-62e819772201-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.392585 4870 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5914070f-d811-4c53-962e-62e819772201-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.392601 4870 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a17e1099-eed8-4519-af45-260df6408a0b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.392611 4870 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a17e1099-eed8-4519-af45-260df6408a0b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.392622 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5914070f-d811-4c53-962e-62e819772201-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.392631 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-276q6\" (UniqueName: \"kubernetes.io/projected/5914070f-d811-4c53-962e-62e819772201-kube-api-access-276q6\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.392643 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a17e1099-eed8-4519-af45-260df6408a0b-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.494236 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-config\") pod \"controller-manager-7c6648756f-h84rs\" (UID: \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\") " pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.495277 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-serving-cert\") pod \"controller-manager-7c6648756f-h84rs\" (UID: \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\") " pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.495359 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-proxy-ca-bundles\") pod \"controller-manager-7c6648756f-h84rs\" (UID: \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\") " pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.495571 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncp7p\" (UniqueName: \"kubernetes.io/projected/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-kube-api-access-ncp7p\") pod \"controller-manager-7c6648756f-h84rs\" (UID: \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\") " pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.495696 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-client-ca\") pod \"controller-manager-7c6648756f-h84rs\" (UID: \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\") " 
pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.496808 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-config\") pod \"controller-manager-7c6648756f-h84rs\" (UID: \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\") " pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.497053 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-proxy-ca-bundles\") pod \"controller-manager-7c6648756f-h84rs\" (UID: \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\") " pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.497120 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-client-ca\") pod \"controller-manager-7c6648756f-h84rs\" (UID: \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\") " pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.501517 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-serving-cert\") pod \"controller-manager-7c6648756f-h84rs\" (UID: \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\") " pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.520855 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncp7p\" (UniqueName: \"kubernetes.io/projected/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-kube-api-access-ncp7p\") pod \"controller-manager-7c6648756f-h84rs\" (UID: \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\") " pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.629354 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.834159 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqng8" event={"ID":"56cb5ce8-da4f-4c24-9805-18a91b316bcd","Type":"ContainerStarted","Data":"5086a69b5f8df7175222c9a53597ceeaa092692fca7dd2ea0dc59c15c50cec17"} Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.837226 4870 generic.go:334] "Generic (PLEG): container finished" podID="1d50529a-bc06-49a9-a5bf-64e91e8734c2" containerID="d0dc443c5c9b20693d4448270af7993e64d959e03cda7b880c0de95b2ee5007b" exitCode=0 Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.837339 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85lwg" event={"ID":"1d50529a-bc06-49a9-a5bf-64e91e8734c2","Type":"ContainerDied","Data":"d0dc443c5c9b20693d4448270af7993e64d959e03cda7b880c0de95b2ee5007b"} Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.840185 4870 generic.go:334] "Generic (PLEG): container finished" podID="f5255b75-6d10-40f0-9d11-c975458382cb" containerID="72d65fd678209cea5f65f67e6fba57d68354274d2c312d3a2b34858c8f8e5d36" exitCode=0 Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.840270 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxbdr" event={"ID":"f5255b75-6d10-40f0-9d11-c975458382cb","Type":"ContainerDied","Data":"72d65fd678209cea5f65f67e6fba57d68354274d2c312d3a2b34858c8f8e5d36"} Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.845083 4870 generic.go:334] "Generic (PLEG): container finished" podID="5914070f-d811-4c53-962e-62e819772201" containerID="922a1b15ac187a2b816c2cea61c9a2c801e0c9ab3229f0eacf89f9c59354543f" exitCode=0 Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.845196 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" event={"ID":"5914070f-d811-4c53-962e-62e819772201","Type":"ContainerDied","Data":"922a1b15ac187a2b816c2cea61c9a2c801e0c9ab3229f0eacf89f9c59354543f"} Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.845207 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.845233 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn" event={"ID":"5914070f-d811-4c53-962e-62e819772201","Type":"ContainerDied","Data":"27e7288fbc9550c8c94128f62c3018e54a3ae3ca4153343d75341c7c9ed7ce95"} Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.845261 4870 scope.go:117] "RemoveContainer" containerID="922a1b15ac187a2b816c2cea61c9a2c801e0c9ab3229f0eacf89f9c59354543f" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.852179 4870 generic.go:334] "Generic (PLEG): container finished" podID="a17e1099-eed8-4519-af45-260df6408a0b" containerID="c68b026edcf68806f3dc17b372ccdee94fa2aa4f5458d1613f492f6076a996ab" exitCode=0 Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.852377 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.852454 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" event={"ID":"a17e1099-eed8-4519-af45-260df6408a0b","Type":"ContainerDied","Data":"c68b026edcf68806f3dc17b372ccdee94fa2aa4f5458d1613f492f6076a996ab"} Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.852538 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8596c5f99c-f2skc" event={"ID":"a17e1099-eed8-4519-af45-260df6408a0b","Type":"ContainerDied","Data":"c3a5d53e77564e84f7b91da983d0401bd24c456bb36029d2060aa1e9d103cd82"} Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.861062 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jqng8" podStartSLOduration=2.956286441 podStartE2EDuration="35.861020893s" podCreationTimestamp="2026-01-30 08:12:02 +0000 UTC" firstStartedPulling="2026-01-30 08:12:04.31392498 +0000 UTC m=+163.009472089" lastFinishedPulling="2026-01-30 08:12:37.218659432 +0000 UTC m=+195.914206541" observedRunningTime="2026-01-30 08:12:37.860104165 +0000 UTC m=+196.555651274" watchObservedRunningTime="2026-01-30 08:12:37.861020893 +0000 UTC m=+196.556568002" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.892154 4870 scope.go:117] "RemoveContainer" containerID="922a1b15ac187a2b816c2cea61c9a2c801e0c9ab3229f0eacf89f9c59354543f" Jan 30 08:12:37 crc kubenswrapper[4870]: E0130 08:12:37.892777 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"922a1b15ac187a2b816c2cea61c9a2c801e0c9ab3229f0eacf89f9c59354543f\": container with ID starting with 922a1b15ac187a2b816c2cea61c9a2c801e0c9ab3229f0eacf89f9c59354543f not found: ID does not exist" containerID="922a1b15ac187a2b816c2cea61c9a2c801e0c9ab3229f0eacf89f9c59354543f" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.892810 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"922a1b15ac187a2b816c2cea61c9a2c801e0c9ab3229f0eacf89f9c59354543f"} err="failed to get container status \"922a1b15ac187a2b816c2cea61c9a2c801e0c9ab3229f0eacf89f9c59354543f\": rpc error: code = NotFound desc = could not find container \"922a1b15ac187a2b816c2cea61c9a2c801e0c9ab3229f0eacf89f9c59354543f\": container with ID starting with 922a1b15ac187a2b816c2cea61c9a2c801e0c9ab3229f0eacf89f9c59354543f not found: ID does not exist" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.892855 4870 scope.go:117] "RemoveContainer" containerID="c68b026edcf68806f3dc17b372ccdee94fa2aa4f5458d1613f492f6076a996ab" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.921045 4870 scope.go:117] "RemoveContainer" containerID="c68b026edcf68806f3dc17b372ccdee94fa2aa4f5458d1613f492f6076a996ab" Jan 30 08:12:37 crc kubenswrapper[4870]: E0130 08:12:37.922282 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c68b026edcf68806f3dc17b372ccdee94fa2aa4f5458d1613f492f6076a996ab\": container with ID starting with c68b026edcf68806f3dc17b372ccdee94fa2aa4f5458d1613f492f6076a996ab not found: ID does not exist" containerID="c68b026edcf68806f3dc17b372ccdee94fa2aa4f5458d1613f492f6076a996ab" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.922375 4870 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c68b026edcf68806f3dc17b372ccdee94fa2aa4f5458d1613f492f6076a996ab"} err="failed to get container status \"c68b026edcf68806f3dc17b372ccdee94fa2aa4f5458d1613f492f6076a996ab\": rpc error: code = NotFound desc = could not find container \"c68b026edcf68806f3dc17b372ccdee94fa2aa4f5458d1613f492f6076a996ab\": container with ID starting with c68b026edcf68806f3dc17b372ccdee94fa2aa4f5458d1613f492f6076a996ab not found: ID does not exist" Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.936256 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8596c5f99c-f2skc"] Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.943113 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8596c5f99c-f2skc"] Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.948647 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn"] Jan 30 08:12:37 crc kubenswrapper[4870]: I0130 08:12:37.953714 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6746697959-z2jtn"] Jan 30 08:12:38 crc kubenswrapper[4870]: I0130 08:12:38.057957 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c6648756f-h84rs"] Jan 30 08:12:38 crc kubenswrapper[4870]: I0130 08:12:38.083562 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5914070f-d811-4c53-962e-62e819772201" path="/var/lib/kubelet/pods/5914070f-d811-4c53-962e-62e819772201/volumes" Jan 30 08:12:38 crc kubenswrapper[4870]: I0130 08:12:38.084141 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a17e1099-eed8-4519-af45-260df6408a0b" path="/var/lib/kubelet/pods/a17e1099-eed8-4519-af45-260df6408a0b/volumes" Jan 30 08:12:38 crc kubenswrapper[4870]: I0130 08:12:38.862518 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" event={"ID":"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05","Type":"ContainerStarted","Data":"df06edc5cedd10f1cf6063d24bbddba4264e6ef76993b8981ce5385c15ccb756"} Jan 30 08:12:38 crc kubenswrapper[4870]: I0130 08:12:38.863120 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" event={"ID":"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05","Type":"ContainerStarted","Data":"29687b205e4bfe58a29a1a38844f13e1a74004c67624566a0e52141eca348418"} Jan 30 08:12:38 crc kubenswrapper[4870]: I0130 08:12:38.864580 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" Jan 30 08:12:38 crc kubenswrapper[4870]: I0130 08:12:38.868404 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85lwg" event={"ID":"1d50529a-bc06-49a9-a5bf-64e91e8734c2","Type":"ContainerStarted","Data":"bbc787722aa3ad5d86b9358f4c93fb7295e3213baf6f9aa990b2876df2f315f2"} Jan 30 08:12:38 crc kubenswrapper[4870]: I0130 08:12:38.871614 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxbdr" event={"ID":"f5255b75-6d10-40f0-9d11-c975458382cb","Type":"ContainerStarted","Data":"c8d326de4781de23a826a5d5681c703c144d7d526b6723de04af2ff52d6c5c69"} Jan 30 08:12:38 crc 
kubenswrapper[4870]: I0130 08:12:38.872568 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" Jan 30 08:12:38 crc kubenswrapper[4870]: I0130 08:12:38.884785 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" podStartSLOduration=2.884758531 podStartE2EDuration="2.884758531s" podCreationTimestamp="2026-01-30 08:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:12:38.881049825 +0000 UTC m=+197.576596934" watchObservedRunningTime="2026-01-30 08:12:38.884758531 +0000 UTC m=+197.580305650" Jan 30 08:12:38 crc kubenswrapper[4870]: I0130 08:12:38.899176 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nxbdr" podStartSLOduration=2.981041084 podStartE2EDuration="34.899156432s" podCreationTimestamp="2026-01-30 08:12:04 +0000 UTC" firstStartedPulling="2026-01-30 08:12:06.421239209 +0000 UTC m=+165.116786318" lastFinishedPulling="2026-01-30 08:12:38.339354557 +0000 UTC m=+197.034901666" observedRunningTime="2026-01-30 08:12:38.898273254 +0000 UTC m=+197.593820373" watchObservedRunningTime="2026-01-30 08:12:38.899156432 +0000 UTC m=+197.594703541" Jan 30 08:12:38 crc kubenswrapper[4870]: I0130 08:12:38.914459 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-85lwg" podStartSLOduration=2.94960082 podStartE2EDuration="35.914439329s" podCreationTimestamp="2026-01-30 08:12:03 +0000 UTC" firstStartedPulling="2026-01-30 08:12:05.369357697 +0000 UTC m=+164.064904806" lastFinishedPulling="2026-01-30 08:12:38.334196206 +0000 UTC m=+197.029743315" observedRunningTime="2026-01-30 08:12:38.913605503 +0000 UTC m=+197.609152622" watchObservedRunningTime="2026-01-30 08:12:38.914439329 +0000 UTC m=+197.609986438" Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.185082 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h"] Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.185939 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.189229 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.189388 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.189530 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.189736 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.189946 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.191189 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.247738 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-serving-cert\") pod \"route-controller-manager-9f547cf58-vv68h\" (UID: \"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef\") " pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.247807 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-config\") pod \"route-controller-manager-9f547cf58-vv68h\" (UID: \"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef\") " pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.247835 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-client-ca\") pod \"route-controller-manager-9f547cf58-vv68h\" (UID: \"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef\") " pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.247859 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh4pz\" (UniqueName: \"kubernetes.io/projected/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-kube-api-access-jh4pz\") pod \"route-controller-manager-9f547cf58-vv68h\" (UID: \"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef\") " pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.300117 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h"] Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.348634 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-client-ca\") pod \"route-controller-manager-9f547cf58-vv68h\" 
(UID: \"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef\") " pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.348705 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh4pz\" (UniqueName: \"kubernetes.io/projected/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-kube-api-access-jh4pz\") pod \"route-controller-manager-9f547cf58-vv68h\" (UID: \"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef\") " pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.348774 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-serving-cert\") pod \"route-controller-manager-9f547cf58-vv68h\" (UID: \"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef\") " pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.348810 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-config\") pod \"route-controller-manager-9f547cf58-vv68h\" (UID: \"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef\") " pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.350060 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-client-ca\") pod \"route-controller-manager-9f547cf58-vv68h\" (UID: \"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef\") " pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.350204 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-config\") pod \"route-controller-manager-9f547cf58-vv68h\" (UID: \"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef\") " pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.366745 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-serving-cert\") pod \"route-controller-manager-9f547cf58-vv68h\" (UID: \"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef\") " pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.369762 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh4pz\" (UniqueName: \"kubernetes.io/projected/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-kube-api-access-jh4pz\") pod \"route-controller-manager-9f547cf58-vv68h\" (UID: \"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef\") " pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.507212 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" Jan 30 08:12:40 crc kubenswrapper[4870]: I0130 08:12:40.966105 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h"] Jan 30 08:12:41 crc kubenswrapper[4870]: I0130 08:12:41.619511 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xxrkx"] Jan 30 08:12:41 crc kubenswrapper[4870]: I0130 08:12:41.885276 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" event={"ID":"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef","Type":"ContainerStarted","Data":"c5fa843ed751475d23b4cc7c99550ca1b426459de45a94b59e3341fcd829105d"} Jan 30 08:12:41 crc kubenswrapper[4870]: I0130 08:12:41.885627 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" event={"ID":"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef","Type":"ContainerStarted","Data":"82eb7cd80398c1cdbf2869d9e797591e5a6a4357ecdd3aada431258c621248cb"} Jan 30 08:12:41 crc kubenswrapper[4870]: I0130 08:12:41.886735 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" Jan 30 08:12:41 crc kubenswrapper[4870]: I0130 08:12:41.898175 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" Jan 30 08:12:41 crc kubenswrapper[4870]: I0130 08:12:41.912424 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" podStartSLOduration=5.91240329 podStartE2EDuration="5.91240329s" podCreationTimestamp="2026-01-30 08:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:12:41.908436646 +0000 UTC m=+200.603983755" watchObservedRunningTime="2026-01-30 08:12:41.91240329 +0000 UTC m=+200.607950399" Jan 30 08:12:42 crc kubenswrapper[4870]: I0130 08:12:42.014156 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 08:12:42 crc kubenswrapper[4870]: I0130 08:12:42.014998 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 08:12:42 crc kubenswrapper[4870]: I0130 08:12:42.017389 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 08:12:42 crc kubenswrapper[4870]: I0130 08:12:42.019159 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 08:12:42 crc kubenswrapper[4870]: I0130 08:12:42.038159 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 08:12:42 crc kubenswrapper[4870]: I0130 08:12:42.070040 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/efc383a2-011d-40cd-95b9-5c1f97710135-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"efc383a2-011d-40cd-95b9-5c1f97710135\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 08:12:42 crc kubenswrapper[4870]: I0130 08:12:42.070114 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/efc383a2-011d-40cd-95b9-5c1f97710135-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"efc383a2-011d-40cd-95b9-5c1f97710135\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 08:12:42 crc kubenswrapper[4870]: I0130 08:12:42.171411 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/efc383a2-011d-40cd-95b9-5c1f97710135-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"efc383a2-011d-40cd-95b9-5c1f97710135\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 08:12:42 crc kubenswrapper[4870]: I0130 08:12:42.171492 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/efc383a2-011d-40cd-95b9-5c1f97710135-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"efc383a2-011d-40cd-95b9-5c1f97710135\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 08:12:42 crc kubenswrapper[4870]: I0130 08:12:42.171567 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/efc383a2-011d-40cd-95b9-5c1f97710135-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"efc383a2-011d-40cd-95b9-5c1f97710135\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 08:12:42 crc kubenswrapper[4870]: I0130 08:12:42.191003 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/efc383a2-011d-40cd-95b9-5c1f97710135-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"efc383a2-011d-40cd-95b9-5c1f97710135\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 08:12:42 crc kubenswrapper[4870]: I0130 08:12:42.332747 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 08:12:42 crc kubenswrapper[4870]: I0130 08:12:42.668261 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jqng8" Jan 30 08:12:42 crc kubenswrapper[4870]: I0130 08:12:42.668685 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jqng8" Jan 30 08:12:42 crc kubenswrapper[4870]: I0130 08:12:42.787203 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 08:12:42 crc kubenswrapper[4870]: W0130 08:12:42.793109 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podefc383a2_011d_40cd_95b9_5c1f97710135.slice/crio-2c5c61c58732b27312a9b5633b48aec98bed1b8b0039f321de6adb67529a94a2 WatchSource:0}: Error finding container 2c5c61c58732b27312a9b5633b48aec98bed1b8b0039f321de6adb67529a94a2: Status 404 returned error can't find the container with id 2c5c61c58732b27312a9b5633b48aec98bed1b8b0039f321de6adb67529a94a2 Jan 30 08:12:42 crc kubenswrapper[4870]: I0130 08:12:42.847301 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jqng8" Jan 30 08:12:42 crc kubenswrapper[4870]: I0130 08:12:42.897623 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"efc383a2-011d-40cd-95b9-5c1f97710135","Type":"ContainerStarted","Data":"2c5c61c58732b27312a9b5633b48aec98bed1b8b0039f321de6adb67529a94a2"} Jan 30 08:12:42 crc kubenswrapper[4870]: I0130 08:12:42.956254 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jqng8" Jan 30 08:12:43 crc kubenswrapper[4870]: I0130 08:12:43.905129 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"efc383a2-011d-40cd-95b9-5c1f97710135","Type":"ContainerStarted","Data":"0c91a5504b707e3d6da8d033947fad87369d660994ee2008baa53a00db707413"} Jan 30 08:12:43 crc kubenswrapper[4870]: I0130 08:12:43.922580 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.922561057 podStartE2EDuration="1.922561057s" podCreationTimestamp="2026-01-30 08:12:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:12:43.921284968 +0000 UTC m=+202.616832077" watchObservedRunningTime="2026-01-30 08:12:43.922561057 +0000 UTC m=+202.618108176" Jan 30 08:12:44 crc kubenswrapper[4870]: I0130 08:12:44.126112 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-85lwg" Jan 30 08:12:44 crc kubenswrapper[4870]: I0130 08:12:44.126166 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-85lwg" Jan 30 08:12:44 crc kubenswrapper[4870]: I0130 08:12:44.526345 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nxbdr" Jan 30 08:12:44 crc kubenswrapper[4870]: I0130 08:12:44.526802 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nxbdr" Jan 30 08:12:44 crc kubenswrapper[4870]: I0130 08:12:44.915293 4870 
generic.go:334] "Generic (PLEG): container finished" podID="efc383a2-011d-40cd-95b9-5c1f97710135" containerID="0c91a5504b707e3d6da8d033947fad87369d660994ee2008baa53a00db707413" exitCode=0 Jan 30 08:12:44 crc kubenswrapper[4870]: I0130 08:12:44.916043 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"efc383a2-011d-40cd-95b9-5c1f97710135","Type":"ContainerDied","Data":"0c91a5504b707e3d6da8d033947fad87369d660994ee2008baa53a00db707413"} Jan 30 08:12:45 crc kubenswrapper[4870]: I0130 08:12:45.169686 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-85lwg" podUID="1d50529a-bc06-49a9-a5bf-64e91e8734c2" containerName="registry-server" probeResult="failure" output=< Jan 30 08:12:45 crc kubenswrapper[4870]: timeout: failed to connect service ":50051" within 1s Jan 30 08:12:45 crc kubenswrapper[4870]: > Jan 30 08:12:45 crc kubenswrapper[4870]: I0130 08:12:45.580830 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nxbdr" podUID="f5255b75-6d10-40f0-9d11-c975458382cb" containerName="registry-server" probeResult="failure" output=< Jan 30 08:12:45 crc kubenswrapper[4870]: timeout: failed to connect service ":50051" within 1s Jan 30 08:12:45 crc kubenswrapper[4870]: > Jan 30 08:12:46 crc kubenswrapper[4870]: I0130 08:12:46.248200 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 08:12:46 crc kubenswrapper[4870]: I0130 08:12:46.330306 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/efc383a2-011d-40cd-95b9-5c1f97710135-kubelet-dir\") pod \"efc383a2-011d-40cd-95b9-5c1f97710135\" (UID: \"efc383a2-011d-40cd-95b9-5c1f97710135\") " Jan 30 08:12:46 crc kubenswrapper[4870]: I0130 08:12:46.330440 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efc383a2-011d-40cd-95b9-5c1f97710135-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "efc383a2-011d-40cd-95b9-5c1f97710135" (UID: "efc383a2-011d-40cd-95b9-5c1f97710135"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:12:46 crc kubenswrapper[4870]: I0130 08:12:46.330465 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/efc383a2-011d-40cd-95b9-5c1f97710135-kube-api-access\") pod \"efc383a2-011d-40cd-95b9-5c1f97710135\" (UID: \"efc383a2-011d-40cd-95b9-5c1f97710135\") " Jan 30 08:12:46 crc kubenswrapper[4870]: I0130 08:12:46.330724 4870 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/efc383a2-011d-40cd-95b9-5c1f97710135-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:46 crc kubenswrapper[4870]: I0130 08:12:46.337508 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efc383a2-011d-40cd-95b9-5c1f97710135-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "efc383a2-011d-40cd-95b9-5c1f97710135" (UID: "efc383a2-011d-40cd-95b9-5c1f97710135"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:12:46 crc kubenswrapper[4870]: I0130 08:12:46.432209 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/efc383a2-011d-40cd-95b9-5c1f97710135-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:46 crc kubenswrapper[4870]: I0130 08:12:46.927287 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"efc383a2-011d-40cd-95b9-5c1f97710135","Type":"ContainerDied","Data":"2c5c61c58732b27312a9b5633b48aec98bed1b8b0039f321de6adb67529a94a2"} Jan 30 08:12:46 crc kubenswrapper[4870]: I0130 08:12:46.927327 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c5c61c58732b27312a9b5633b48aec98bed1b8b0039f321de6adb67529a94a2" Jan 30 08:12:46 crc kubenswrapper[4870]: I0130 08:12:46.927396 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 08:12:49 crc kubenswrapper[4870]: I0130 08:12:49.413555 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 08:12:49 crc kubenswrapper[4870]: E0130 08:12:49.414045 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efc383a2-011d-40cd-95b9-5c1f97710135" containerName="pruner" Jan 30 08:12:49 crc kubenswrapper[4870]: I0130 08:12:49.414056 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="efc383a2-011d-40cd-95b9-5c1f97710135" containerName="pruner" Jan 30 08:12:49 crc kubenswrapper[4870]: I0130 08:12:49.414205 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="efc383a2-011d-40cd-95b9-5c1f97710135" containerName="pruner" Jan 30 08:12:49 crc kubenswrapper[4870]: I0130 08:12:49.414593 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:12:49 crc kubenswrapper[4870]: I0130 08:12:49.416521 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 08:12:49 crc kubenswrapper[4870]: I0130 08:12:49.416782 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 08:12:49 crc kubenswrapper[4870]: I0130 08:12:49.425829 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 08:12:49 crc kubenswrapper[4870]: I0130 08:12:49.470059 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/08cc9cc4-dd06-46a2-94c4-0b1977cd1adc-var-lock\") pod \"installer-9-crc\" (UID: \"08cc9cc4-dd06-46a2-94c4-0b1977cd1adc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:12:49 crc kubenswrapper[4870]: I0130 08:12:49.470124 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08cc9cc4-dd06-46a2-94c4-0b1977cd1adc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"08cc9cc4-dd06-46a2-94c4-0b1977cd1adc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:12:49 crc kubenswrapper[4870]: I0130 08:12:49.470177 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08cc9cc4-dd06-46a2-94c4-0b1977cd1adc-kube-api-access\") pod \"installer-9-crc\" (UID: \"08cc9cc4-dd06-46a2-94c4-0b1977cd1adc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:12:49 crc kubenswrapper[4870]: I0130 08:12:49.571660 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/08cc9cc4-dd06-46a2-94c4-0b1977cd1adc-var-lock\") pod \"installer-9-crc\" (UID: \"08cc9cc4-dd06-46a2-94c4-0b1977cd1adc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:12:49 crc kubenswrapper[4870]: I0130 08:12:49.571768 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08cc9cc4-dd06-46a2-94c4-0b1977cd1adc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"08cc9cc4-dd06-46a2-94c4-0b1977cd1adc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:12:49 crc kubenswrapper[4870]: I0130 08:12:49.571841 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/08cc9cc4-dd06-46a2-94c4-0b1977cd1adc-var-lock\") pod \"installer-9-crc\" (UID: \"08cc9cc4-dd06-46a2-94c4-0b1977cd1adc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:12:49 crc kubenswrapper[4870]: I0130 08:12:49.571857 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08cc9cc4-dd06-46a2-94c4-0b1977cd1adc-kube-api-access\") pod \"installer-9-crc\" (UID: \"08cc9cc4-dd06-46a2-94c4-0b1977cd1adc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:12:49 crc kubenswrapper[4870]: I0130 08:12:49.571965 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08cc9cc4-dd06-46a2-94c4-0b1977cd1adc-kubelet-dir\") pod \"installer-9-crc\" (UID: 
\"08cc9cc4-dd06-46a2-94c4-0b1977cd1adc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:12:49 crc kubenswrapper[4870]: I0130 08:12:49.590323 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08cc9cc4-dd06-46a2-94c4-0b1977cd1adc-kube-api-access\") pod \"installer-9-crc\" (UID: \"08cc9cc4-dd06-46a2-94c4-0b1977cd1adc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:12:49 crc kubenswrapper[4870]: I0130 08:12:49.739905 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:12:50 crc kubenswrapper[4870]: I0130 08:12:50.138334 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 08:12:51 crc kubenswrapper[4870]: I0130 08:12:50.999565 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"08cc9cc4-dd06-46a2-94c4-0b1977cd1adc","Type":"ContainerStarted","Data":"870ae255fc8aa69089480c5b4f44f2d48029e57db6c300a41e2ada010df31423"} Jan 30 08:12:51 crc kubenswrapper[4870]: I0130 08:12:51.000098 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"08cc9cc4-dd06-46a2-94c4-0b1977cd1adc","Type":"ContainerStarted","Data":"7bf7aacf1a3cb5782a5f9385d5b6312bd1fa309375e7e58df111c48bf3bdf731"} Jan 30 08:12:51 crc kubenswrapper[4870]: I0130 08:12:51.024905 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.024884951 podStartE2EDuration="2.024884951s" podCreationTimestamp="2026-01-30 08:12:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:12:51.013605359 +0000 UTC m=+209.709152478" watchObservedRunningTime="2026-01-30 08:12:51.024884951 +0000 UTC m=+209.720432060" Jan 30 08:12:52 crc kubenswrapper[4870]: I0130 08:12:52.023813 4870 generic.go:334] "Generic (PLEG): container finished" podID="ba2950a4-e1b9-45a9-9980-1b4169e0fb16" containerID="4e94e4129ecab37de0297dde4dc86e9ac30e8fda6a11f59af65a8c199b125d87" exitCode=0 Jan 30 08:12:52 crc kubenswrapper[4870]: I0130 08:12:52.023903 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk4lj" event={"ID":"ba2950a4-e1b9-45a9-9980-1b4169e0fb16","Type":"ContainerDied","Data":"4e94e4129ecab37de0297dde4dc86e9ac30e8fda6a11f59af65a8c199b125d87"} Jan 30 08:12:52 crc kubenswrapper[4870]: I0130 08:12:52.029342 4870 generic.go:334] "Generic (PLEG): container finished" podID="025ee8c8-8a97-4158-88fb-c4fa23f5c9c2" containerID="4de26b3246c8808449e770399c4f7fa4183c010b1afab00b9d0e527740a08bc4" exitCode=0 Jan 30 08:12:52 crc kubenswrapper[4870]: I0130 08:12:52.029438 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddg46" event={"ID":"025ee8c8-8a97-4158-88fb-c4fa23f5c9c2","Type":"ContainerDied","Data":"4de26b3246c8808449e770399c4f7fa4183c010b1afab00b9d0e527740a08bc4"} Jan 30 08:12:52 crc kubenswrapper[4870]: I0130 08:12:52.035163 4870 generic.go:334] "Generic (PLEG): container finished" podID="e02d35f8-2e8c-47a3-87c9-9580ab766290" containerID="05623975ae7b461659818593078417aaae1d2eaf3b57481a7bebadeb7853e38b" exitCode=0 Jan 30 08:12:52 crc kubenswrapper[4870]: I0130 08:12:52.035239 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-sdlrf" event={"ID":"e02d35f8-2e8c-47a3-87c9-9580ab766290","Type":"ContainerDied","Data":"05623975ae7b461659818593078417aaae1d2eaf3b57481a7bebadeb7853e38b"} Jan 30 08:12:52 crc kubenswrapper[4870]: I0130 08:12:52.039209 4870 generic.go:334] "Generic (PLEG): container finished" podID="abc41080-75c5-421f-baa8-f05792f74564" containerID="a50ef1faa7c24f23fbc57c47c06d93445c8919db7d381cb38dc62d4428844028" exitCode=0 Jan 30 08:12:52 crc kubenswrapper[4870]: I0130 08:12:52.040041 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vm685" event={"ID":"abc41080-75c5-421f-baa8-f05792f74564","Type":"ContainerDied","Data":"a50ef1faa7c24f23fbc57c47c06d93445c8919db7d381cb38dc62d4428844028"} Jan 30 08:12:53 crc kubenswrapper[4870]: I0130 08:12:53.048619 4870 generic.go:334] "Generic (PLEG): container finished" podID="258d3e35-5580-4108-889c-9d5d2f80c810" containerID="31bdc406d04a8518a48f85291f438714500a3199ef4565a4e1bcc218ea393cac" exitCode=0 Jan 30 08:12:53 crc kubenswrapper[4870]: I0130 08:12:53.048709 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cx2x5" event={"ID":"258d3e35-5580-4108-889c-9d5d2f80c810","Type":"ContainerDied","Data":"31bdc406d04a8518a48f85291f438714500a3199ef4565a4e1bcc218ea393cac"} Jan 30 08:12:53 crc kubenswrapper[4870]: I0130 08:12:53.053973 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddg46" event={"ID":"025ee8c8-8a97-4158-88fb-c4fa23f5c9c2","Type":"ContainerStarted","Data":"7b76f09aa9f9994ca61e250ea9fcc87f7cc8e5437b6738688da0a44244a55f70"} Jan 30 08:12:53 crc kubenswrapper[4870]: I0130 08:12:53.057189 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sdlrf" event={"ID":"e02d35f8-2e8c-47a3-87c9-9580ab766290","Type":"ContainerStarted","Data":"0ab86db3891506c1e61fe197330cdb4401ed14cf6adc50c8af9e3081bf1ba9b5"} Jan 30 08:12:53 crc kubenswrapper[4870]: I0130 08:12:53.062988 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vm685" event={"ID":"abc41080-75c5-421f-baa8-f05792f74564","Type":"ContainerStarted","Data":"fd4c89b8cbd81ba1ad1c068e8582d00bddadd898a4dcfa9e47b3ae08c6dd6344"} Jan 30 08:12:53 crc kubenswrapper[4870]: I0130 08:12:53.070898 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk4lj" event={"ID":"ba2950a4-e1b9-45a9-9980-1b4169e0fb16","Type":"ContainerStarted","Data":"9c925d71b4dfdc55925a74993dfa3447a8c069656a12a2daf0cbfcade11ab1ed"} Jan 30 08:12:53 crc kubenswrapper[4870]: I0130 08:12:53.113738 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sdlrf" podStartSLOduration=2.441989311 podStartE2EDuration="53.1137093s" podCreationTimestamp="2026-01-30 08:12:00 +0000 UTC" firstStartedPulling="2026-01-30 08:12:02.143477482 +0000 UTC m=+160.839024591" lastFinishedPulling="2026-01-30 08:12:52.815197471 +0000 UTC m=+211.510744580" observedRunningTime="2026-01-30 08:12:53.104204913 +0000 UTC m=+211.799752022" watchObservedRunningTime="2026-01-30 08:12:53.1137093 +0000 UTC m=+211.809256409" Jan 30 08:12:53 crc kubenswrapper[4870]: I0130 08:12:53.132783 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ddg46" podStartSLOduration=3.010321136 podStartE2EDuration="51.132768776s" 
podCreationTimestamp="2026-01-30 08:12:02 +0000 UTC" firstStartedPulling="2026-01-30 08:12:04.31424424 +0000 UTC m=+163.009791349" lastFinishedPulling="2026-01-30 08:12:52.43669187 +0000 UTC m=+211.132238989" observedRunningTime="2026-01-30 08:12:53.128916826 +0000 UTC m=+211.824463935" watchObservedRunningTime="2026-01-30 08:12:53.132768776 +0000 UTC m=+211.828315875" Jan 30 08:12:53 crc kubenswrapper[4870]: I0130 08:12:53.152485 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vm685" podStartSLOduration=2.813658786 podStartE2EDuration="53.152469972s" podCreationTimestamp="2026-01-30 08:12:00 +0000 UTC" firstStartedPulling="2026-01-30 08:12:02.150362635 +0000 UTC m=+160.845909734" lastFinishedPulling="2026-01-30 08:12:52.489173811 +0000 UTC m=+211.184720920" observedRunningTime="2026-01-30 08:12:53.149442368 +0000 UTC m=+211.844989477" watchObservedRunningTime="2026-01-30 08:12:53.152469972 +0000 UTC m=+211.848017071" Jan 30 08:12:53 crc kubenswrapper[4870]: I0130 08:12:53.171721 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rk4lj" podStartSLOduration=2.830874149 podStartE2EDuration="53.171703634s" podCreationTimestamp="2026-01-30 08:12:00 +0000 UTC" firstStartedPulling="2026-01-30 08:12:02.153527678 +0000 UTC m=+160.849074797" lastFinishedPulling="2026-01-30 08:12:52.494357163 +0000 UTC m=+211.189904282" observedRunningTime="2026-01-30 08:12:53.170348082 +0000 UTC m=+211.865895191" watchObservedRunningTime="2026-01-30 08:12:53.171703634 +0000 UTC m=+211.867250743" Jan 30 08:12:54 crc kubenswrapper[4870]: I0130 08:12:54.080583 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cx2x5" event={"ID":"258d3e35-5580-4108-889c-9d5d2f80c810","Type":"ContainerStarted","Data":"5d7f1da1a59f0ee841deb45c6681be28192aa5d1b0765a1de1cc229f89986ccd"} Jan 30 08:12:54 crc kubenswrapper[4870]: I0130 08:12:54.196694 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-85lwg" Jan 30 08:12:54 crc kubenswrapper[4870]: I0130 08:12:54.229997 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cx2x5" podStartSLOduration=2.7343042 podStartE2EDuration="54.229955331s" podCreationTimestamp="2026-01-30 08:12:00 +0000 UTC" firstStartedPulling="2026-01-30 08:12:02.163299296 +0000 UTC m=+160.858846395" lastFinishedPulling="2026-01-30 08:12:53.658950417 +0000 UTC m=+212.354497526" observedRunningTime="2026-01-30 08:12:54.109847674 +0000 UTC m=+212.805394783" watchObservedRunningTime="2026-01-30 08:12:54.229955331 +0000 UTC m=+212.925502440" Jan 30 08:12:54 crc kubenswrapper[4870]: I0130 08:12:54.251101 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-85lwg" Jan 30 08:12:54 crc kubenswrapper[4870]: I0130 08:12:54.587783 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nxbdr" Jan 30 08:12:54 crc kubenswrapper[4870]: I0130 08:12:54.640154 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nxbdr" Jan 30 08:12:55 crc kubenswrapper[4870]: I0130 08:12:55.249746 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:12:55 crc kubenswrapper[4870]: I0130 08:12:55.249818 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:12:55 crc kubenswrapper[4870]: I0130 08:12:55.249895 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:12:55 crc kubenswrapper[4870]: I0130 08:12:55.250666 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7"} pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:12:55 crc kubenswrapper[4870]: I0130 08:12:55.250744 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" containerID="cri-o://94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7" gracePeriod=600 Jan 30 08:12:56 crc kubenswrapper[4870]: I0130 08:12:56.064964 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c6648756f-h84rs"] Jan 30 08:12:56 crc kubenswrapper[4870]: I0130 08:12:56.065575 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" podUID="2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05" containerName="controller-manager" containerID="cri-o://df06edc5cedd10f1cf6063d24bbddba4264e6ef76993b8981ce5385c15ccb756" gracePeriod=30 Jan 30 08:12:56 crc kubenswrapper[4870]: I0130 08:12:56.088947 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h"] Jan 30 08:12:56 crc kubenswrapper[4870]: I0130 08:12:56.089178 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" podUID="9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef" containerName="route-controller-manager" containerID="cri-o://c5fa843ed751475d23b4cc7c99550ca1b426459de45a94b59e3341fcd829105d" gracePeriod=30 Jan 30 08:12:56 crc kubenswrapper[4870]: I0130 08:12:56.107283 4870 generic.go:334] "Generic (PLEG): container finished" podID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerID="94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7" exitCode=0 Jan 30 08:12:56 crc kubenswrapper[4870]: I0130 08:12:56.107342 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerDied","Data":"94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7"} Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.115256 4870 generic.go:334] "Generic (PLEG): container finished" podID="2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05" 
containerID="df06edc5cedd10f1cf6063d24bbddba4264e6ef76993b8981ce5385c15ccb756" exitCode=0 Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.115368 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" event={"ID":"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05","Type":"ContainerDied","Data":"df06edc5cedd10f1cf6063d24bbddba4264e6ef76993b8981ce5385c15ccb756"} Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.118815 4870 generic.go:334] "Generic (PLEG): container finished" podID="9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef" containerID="c5fa843ed751475d23b4cc7c99550ca1b426459de45a94b59e3341fcd829105d" exitCode=0 Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.118905 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" event={"ID":"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef","Type":"ContainerDied","Data":"c5fa843ed751475d23b4cc7c99550ca1b426459de45a94b59e3341fcd829105d"} Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.123815 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerStarted","Data":"a02ea1737ed88e21ae8883a3a6a22392b0695152f4ace29771521a0445381b12"} Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.415968 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.461749 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt"] Jan 30 08:12:57 crc kubenswrapper[4870]: E0130 08:12:57.462015 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef" containerName="route-controller-manager" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.462036 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef" containerName="route-controller-manager" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.462185 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef" containerName="route-controller-manager" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.463995 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.475378 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt"] Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.489826 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-serving-cert\") pod \"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef\" (UID: \"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef\") " Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.489867 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-config\") pod \"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef\" (UID: \"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef\") " Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.489956 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh4pz\" (UniqueName: \"kubernetes.io/projected/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-kube-api-access-jh4pz\") pod \"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef\" (UID: \"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef\") " Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.490031 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-client-ca\") pod \"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef\" (UID: \"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef\") " Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.490781 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-client-ca" (OuterVolumeSpecName: "client-ca") pod "9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef" (UID: "9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.490803 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-config" (OuterVolumeSpecName: "config") pod "9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef" (UID: "9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.496795 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef" (UID: "9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.496842 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-kube-api-access-jh4pz" (OuterVolumeSpecName: "kube-api-access-jh4pz") pod "9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef" (UID: "9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef"). InnerVolumeSpecName "kube-api-access-jh4pz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.498646 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.591525 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncp7p\" (UniqueName: \"kubernetes.io/projected/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-kube-api-access-ncp7p\") pod \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\" (UID: \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\") " Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.591689 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-serving-cert\") pod \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\" (UID: \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\") " Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.592156 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-proxy-ca-bundles\") pod \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\" (UID: \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\") " Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.592204 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-config\") pod \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\" (UID: \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\") " Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.592247 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-client-ca\") pod \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\" (UID: \"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05\") " Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.592419 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrvtn\" (UniqueName: \"kubernetes.io/projected/2401867d-7869-4633-aeeb-bfb3653c2786-kube-api-access-lrvtn\") pod \"route-controller-manager-658c669f4d-v69zt\" (UID: \"2401867d-7869-4633-aeeb-bfb3653c2786\") " pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.592456 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2401867d-7869-4633-aeeb-bfb3653c2786-serving-cert\") pod \"route-controller-manager-658c669f4d-v69zt\" (UID: \"2401867d-7869-4633-aeeb-bfb3653c2786\") " pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.592518 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2401867d-7869-4633-aeeb-bfb3653c2786-config\") pod \"route-controller-manager-658c669f4d-v69zt\" (UID: \"2401867d-7869-4633-aeeb-bfb3653c2786\") " pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.592545 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2401867d-7869-4633-aeeb-bfb3653c2786-client-ca\") pod \"route-controller-manager-658c669f4d-v69zt\" (UID: \"2401867d-7869-4633-aeeb-bfb3653c2786\") " pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.592828 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh4pz\" (UniqueName: \"kubernetes.io/projected/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-kube-api-access-jh4pz\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.592860 4870 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.592923 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.592938 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.593069 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05" (UID: "2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.593124 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-client-ca" (OuterVolumeSpecName: "client-ca") pod "2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05" (UID: "2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.593364 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-config" (OuterVolumeSpecName: "config") pod "2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05" (UID: "2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.594927 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05" (UID: "2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.595154 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-kube-api-access-ncp7p" (OuterVolumeSpecName: "kube-api-access-ncp7p") pod "2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05" (UID: "2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05"). InnerVolumeSpecName "kube-api-access-ncp7p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.694050 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrvtn\" (UniqueName: \"kubernetes.io/projected/2401867d-7869-4633-aeeb-bfb3653c2786-kube-api-access-lrvtn\") pod \"route-controller-manager-658c669f4d-v69zt\" (UID: \"2401867d-7869-4633-aeeb-bfb3653c2786\") " pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.694128 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2401867d-7869-4633-aeeb-bfb3653c2786-serving-cert\") pod \"route-controller-manager-658c669f4d-v69zt\" (UID: \"2401867d-7869-4633-aeeb-bfb3653c2786\") " pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.694219 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2401867d-7869-4633-aeeb-bfb3653c2786-config\") pod \"route-controller-manager-658c669f4d-v69zt\" (UID: \"2401867d-7869-4633-aeeb-bfb3653c2786\") " pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.694275 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2401867d-7869-4633-aeeb-bfb3653c2786-client-ca\") pod \"route-controller-manager-658c669f4d-v69zt\" (UID: \"2401867d-7869-4633-aeeb-bfb3653c2786\") " pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.694417 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.694448 4870 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.694480 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.694506 4870 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.694531 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncp7p\" (UniqueName: \"kubernetes.io/projected/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05-kube-api-access-ncp7p\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.695443 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2401867d-7869-4633-aeeb-bfb3653c2786-config\") pod \"route-controller-manager-658c669f4d-v69zt\" (UID: \"2401867d-7869-4633-aeeb-bfb3653c2786\") " pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" Jan 30 08:12:57 
crc kubenswrapper[4870]: I0130 08:12:57.695975 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2401867d-7869-4633-aeeb-bfb3653c2786-client-ca\") pod \"route-controller-manager-658c669f4d-v69zt\" (UID: \"2401867d-7869-4633-aeeb-bfb3653c2786\") " pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.700863 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2401867d-7869-4633-aeeb-bfb3653c2786-serving-cert\") pod \"route-controller-manager-658c669f4d-v69zt\" (UID: \"2401867d-7869-4633-aeeb-bfb3653c2786\") " pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.712938 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrvtn\" (UniqueName: \"kubernetes.io/projected/2401867d-7869-4633-aeeb-bfb3653c2786-kube-api-access-lrvtn\") pod \"route-controller-manager-658c669f4d-v69zt\" (UID: \"2401867d-7869-4633-aeeb-bfb3653c2786\") " pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" Jan 30 08:12:57 crc kubenswrapper[4870]: I0130 08:12:57.833094 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" Jan 30 08:12:58 crc kubenswrapper[4870]: I0130 08:12:58.138485 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" event={"ID":"9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef","Type":"ContainerDied","Data":"82eb7cd80398c1cdbf2869d9e797591e5a6a4357ecdd3aada431258c621248cb"} Jan 30 08:12:58 crc kubenswrapper[4870]: I0130 08:12:58.138884 4870 scope.go:117] "RemoveContainer" containerID="c5fa843ed751475d23b4cc7c99550ca1b426459de45a94b59e3341fcd829105d" Jan 30 08:12:58 crc kubenswrapper[4870]: I0130 08:12:58.138657 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h" Jan 30 08:12:58 crc kubenswrapper[4870]: I0130 08:12:58.142159 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" event={"ID":"2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05","Type":"ContainerDied","Data":"29687b205e4bfe58a29a1a38844f13e1a74004c67624566a0e52141eca348418"} Jan 30 08:12:58 crc kubenswrapper[4870]: I0130 08:12:58.142189 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c6648756f-h84rs" Jan 30 08:12:58 crc kubenswrapper[4870]: I0130 08:12:58.167041 4870 scope.go:117] "RemoveContainer" containerID="df06edc5cedd10f1cf6063d24bbddba4264e6ef76993b8981ce5385c15ccb756" Jan 30 08:12:58 crc kubenswrapper[4870]: I0130 08:12:58.171101 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h"] Jan 30 08:12:58 crc kubenswrapper[4870]: I0130 08:12:58.181831 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9f547cf58-vv68h"] Jan 30 08:12:58 crc kubenswrapper[4870]: I0130 08:12:58.190654 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c6648756f-h84rs"] Jan 30 08:12:58 crc kubenswrapper[4870]: I0130 08:12:58.193699 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c6648756f-h84rs"] Jan 30 08:12:58 crc kubenswrapper[4870]: I0130 08:12:58.242228 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt"] Jan 30 08:12:58 crc kubenswrapper[4870]: W0130 08:12:58.250060 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2401867d_7869_4633_aeeb_bfb3653c2786.slice/crio-a4cce0bf268510416f33b5ed9e80efbf966fec2a36ba47966d7bf118bdd855e6 WatchSource:0}: Error finding container a4cce0bf268510416f33b5ed9e80efbf966fec2a36ba47966d7bf118bdd855e6: Status 404 returned error can't find the container with id a4cce0bf268510416f33b5ed9e80efbf966fec2a36ba47966d7bf118bdd855e6 Jan 30 08:12:58 crc kubenswrapper[4870]: I0130 08:12:58.312683 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nxbdr"] Jan 30 08:12:58 crc kubenswrapper[4870]: I0130 08:12:58.312927 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nxbdr" podUID="f5255b75-6d10-40f0-9d11-c975458382cb" containerName="registry-server" containerID="cri-o://c8d326de4781de23a826a5d5681c703c144d7d526b6723de04af2ff52d6c5c69" gracePeriod=2 Jan 30 08:12:58 crc kubenswrapper[4870]: I0130 08:12:58.713347 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nxbdr" Jan 30 08:12:58 crc kubenswrapper[4870]: I0130 08:12:58.809740 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5255b75-6d10-40f0-9d11-c975458382cb-utilities\") pod \"f5255b75-6d10-40f0-9d11-c975458382cb\" (UID: \"f5255b75-6d10-40f0-9d11-c975458382cb\") " Jan 30 08:12:58 crc kubenswrapper[4870]: I0130 08:12:58.809902 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmxfx\" (UniqueName: \"kubernetes.io/projected/f5255b75-6d10-40f0-9d11-c975458382cb-kube-api-access-hmxfx\") pod \"f5255b75-6d10-40f0-9d11-c975458382cb\" (UID: \"f5255b75-6d10-40f0-9d11-c975458382cb\") " Jan 30 08:12:58 crc kubenswrapper[4870]: I0130 08:12:58.809975 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5255b75-6d10-40f0-9d11-c975458382cb-catalog-content\") pod \"f5255b75-6d10-40f0-9d11-c975458382cb\" (UID: \"f5255b75-6d10-40f0-9d11-c975458382cb\") " Jan 30 08:12:58 crc kubenswrapper[4870]: I0130 08:12:58.811665 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5255b75-6d10-40f0-9d11-c975458382cb-utilities" (OuterVolumeSpecName: "utilities") pod "f5255b75-6d10-40f0-9d11-c975458382cb" (UID: "f5255b75-6d10-40f0-9d11-c975458382cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:12:58 crc kubenswrapper[4870]: I0130 08:12:58.817976 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5255b75-6d10-40f0-9d11-c975458382cb-kube-api-access-hmxfx" (OuterVolumeSpecName: "kube-api-access-hmxfx") pod "f5255b75-6d10-40f0-9d11-c975458382cb" (UID: "f5255b75-6d10-40f0-9d11-c975458382cb"). InnerVolumeSpecName "kube-api-access-hmxfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:12:58 crc kubenswrapper[4870]: I0130 08:12:58.911731 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmxfx\" (UniqueName: \"kubernetes.io/projected/f5255b75-6d10-40f0-9d11-c975458382cb-kube-api-access-hmxfx\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:58 crc kubenswrapper[4870]: I0130 08:12:58.911774 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5255b75-6d10-40f0-9d11-c975458382cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:58 crc kubenswrapper[4870]: I0130 08:12:58.956269 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5255b75-6d10-40f0-9d11-c975458382cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5255b75-6d10-40f0-9d11-c975458382cb" (UID: "f5255b75-6d10-40f0-9d11-c975458382cb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:12:59 crc kubenswrapper[4870]: I0130 08:12:59.013727 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5255b75-6d10-40f0-9d11-c975458382cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:12:59 crc kubenswrapper[4870]: I0130 08:12:59.150383 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" event={"ID":"2401867d-7869-4633-aeeb-bfb3653c2786","Type":"ContainerStarted","Data":"761594cbe8a68e508978ea78d2572aa1dd60ea6b5eadf680df6e2838d7862ec9"} Jan 30 08:12:59 crc kubenswrapper[4870]: I0130 08:12:59.150429 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" event={"ID":"2401867d-7869-4633-aeeb-bfb3653c2786","Type":"ContainerStarted","Data":"a4cce0bf268510416f33b5ed9e80efbf966fec2a36ba47966d7bf118bdd855e6"} Jan 30 08:12:59 crc kubenswrapper[4870]: I0130 08:12:59.150698 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" Jan 30 08:12:59 crc kubenswrapper[4870]: I0130 08:12:59.158741 4870 generic.go:334] "Generic (PLEG): container finished" podID="f5255b75-6d10-40f0-9d11-c975458382cb" containerID="c8d326de4781de23a826a5d5681c703c144d7d526b6723de04af2ff52d6c5c69" exitCode=0 Jan 30 08:12:59 crc kubenswrapper[4870]: I0130 08:12:59.158776 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxbdr" event={"ID":"f5255b75-6d10-40f0-9d11-c975458382cb","Type":"ContainerDied","Data":"c8d326de4781de23a826a5d5681c703c144d7d526b6723de04af2ff52d6c5c69"} Jan 30 08:12:59 crc kubenswrapper[4870]: I0130 08:12:59.158795 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxbdr" event={"ID":"f5255b75-6d10-40f0-9d11-c975458382cb","Type":"ContainerDied","Data":"75dc4d4ca08c96b6af316cef86b49419d2a6ad7374d685b482b8ff2fed0aeb65"} Jan 30 08:12:59 crc kubenswrapper[4870]: I0130 08:12:59.158853 4870 scope.go:117] "RemoveContainer" containerID="c8d326de4781de23a826a5d5681c703c144d7d526b6723de04af2ff52d6c5c69" Jan 30 08:12:59 crc kubenswrapper[4870]: I0130 08:12:59.158973 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nxbdr" Jan 30 08:12:59 crc kubenswrapper[4870]: I0130 08:12:59.161541 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" Jan 30 08:12:59 crc kubenswrapper[4870]: I0130 08:12:59.209661 4870 scope.go:117] "RemoveContainer" containerID="72d65fd678209cea5f65f67e6fba57d68354274d2c312d3a2b34858c8f8e5d36" Jan 30 08:12:59 crc kubenswrapper[4870]: I0130 08:12:59.219668 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" podStartSLOduration=3.2196494429999998 podStartE2EDuration="3.219649443s" podCreationTimestamp="2026-01-30 08:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:12:59.215593416 +0000 UTC m=+217.911140535" watchObservedRunningTime="2026-01-30 08:12:59.219649443 +0000 UTC m=+217.915196572" Jan 30 08:12:59 crc kubenswrapper[4870]: I0130 08:12:59.263693 4870 scope.go:117] "RemoveContainer" containerID="91c2f94ca5ba24911752768fbf8405337095dc7b2ce3321b5d0889c6d9e3f381" Jan 30 08:12:59 crc kubenswrapper[4870]: I0130 08:12:59.282005 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nxbdr"] Jan 30 08:12:59 crc kubenswrapper[4870]: I0130 08:12:59.300846 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nxbdr"] Jan 30 08:12:59 crc kubenswrapper[4870]: I0130 08:12:59.310218 4870 scope.go:117] "RemoveContainer" containerID="c8d326de4781de23a826a5d5681c703c144d7d526b6723de04af2ff52d6c5c69" Jan 30 08:12:59 crc kubenswrapper[4870]: E0130 08:12:59.310826 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8d326de4781de23a826a5d5681c703c144d7d526b6723de04af2ff52d6c5c69\": container with ID starting with c8d326de4781de23a826a5d5681c703c144d7d526b6723de04af2ff52d6c5c69 not found: ID does not exist" containerID="c8d326de4781de23a826a5d5681c703c144d7d526b6723de04af2ff52d6c5c69" Jan 30 08:12:59 crc kubenswrapper[4870]: I0130 08:12:59.310924 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8d326de4781de23a826a5d5681c703c144d7d526b6723de04af2ff52d6c5c69"} err="failed to get container status \"c8d326de4781de23a826a5d5681c703c144d7d526b6723de04af2ff52d6c5c69\": rpc error: code = NotFound desc = could not find container \"c8d326de4781de23a826a5d5681c703c144d7d526b6723de04af2ff52d6c5c69\": container with ID starting with c8d326de4781de23a826a5d5681c703c144d7d526b6723de04af2ff52d6c5c69 not found: ID does not exist" Jan 30 08:12:59 crc kubenswrapper[4870]: I0130 08:12:59.310958 4870 scope.go:117] "RemoveContainer" containerID="72d65fd678209cea5f65f67e6fba57d68354274d2c312d3a2b34858c8f8e5d36" Jan 30 08:12:59 crc kubenswrapper[4870]: E0130 08:12:59.311397 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72d65fd678209cea5f65f67e6fba57d68354274d2c312d3a2b34858c8f8e5d36\": container with ID starting with 72d65fd678209cea5f65f67e6fba57d68354274d2c312d3a2b34858c8f8e5d36 not found: ID does not exist" containerID="72d65fd678209cea5f65f67e6fba57d68354274d2c312d3a2b34858c8f8e5d36" Jan 30 08:12:59 crc kubenswrapper[4870]: I0130 08:12:59.311455 
4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72d65fd678209cea5f65f67e6fba57d68354274d2c312d3a2b34858c8f8e5d36"} err="failed to get container status \"72d65fd678209cea5f65f67e6fba57d68354274d2c312d3a2b34858c8f8e5d36\": rpc error: code = NotFound desc = could not find container \"72d65fd678209cea5f65f67e6fba57d68354274d2c312d3a2b34858c8f8e5d36\": container with ID starting with 72d65fd678209cea5f65f67e6fba57d68354274d2c312d3a2b34858c8f8e5d36 not found: ID does not exist" Jan 30 08:12:59 crc kubenswrapper[4870]: I0130 08:12:59.311491 4870 scope.go:117] "RemoveContainer" containerID="91c2f94ca5ba24911752768fbf8405337095dc7b2ce3321b5d0889c6d9e3f381" Jan 30 08:12:59 crc kubenswrapper[4870]: E0130 08:12:59.311800 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91c2f94ca5ba24911752768fbf8405337095dc7b2ce3321b5d0889c6d9e3f381\": container with ID starting with 91c2f94ca5ba24911752768fbf8405337095dc7b2ce3321b5d0889c6d9e3f381 not found: ID does not exist" containerID="91c2f94ca5ba24911752768fbf8405337095dc7b2ce3321b5d0889c6d9e3f381" Jan 30 08:12:59 crc kubenswrapper[4870]: I0130 08:12:59.311840 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91c2f94ca5ba24911752768fbf8405337095dc7b2ce3321b5d0889c6d9e3f381"} err="failed to get container status \"91c2f94ca5ba24911752768fbf8405337095dc7b2ce3321b5d0889c6d9e3f381\": rpc error: code = NotFound desc = could not find container \"91c2f94ca5ba24911752768fbf8405337095dc7b2ce3321b5d0889c6d9e3f381\": container with ID starting with 91c2f94ca5ba24911752768fbf8405337095dc7b2ce3321b5d0889c6d9e3f381 not found: ID does not exist" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.083329 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05" path="/var/lib/kubelet/pods/2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05/volumes" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.084452 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef" path="/var/lib/kubelet/pods/9e4a05ab-16ef-4d68-9e7e-a9064ce0a4ef/volumes" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.085158 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5255b75-6d10-40f0-9d11-c975458382cb" path="/var/lib/kubelet/pods/f5255b75-6d10-40f0-9d11-c975458382cb/volumes" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.190667 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c6cdccc5f-szscj"] Jan 30 08:13:00 crc kubenswrapper[4870]: E0130 08:13:00.191014 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5255b75-6d10-40f0-9d11-c975458382cb" containerName="extract-content" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.191034 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5255b75-6d10-40f0-9d11-c975458382cb" containerName="extract-content" Jan 30 08:13:00 crc kubenswrapper[4870]: E0130 08:13:00.191048 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5255b75-6d10-40f0-9d11-c975458382cb" containerName="registry-server" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.191056 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5255b75-6d10-40f0-9d11-c975458382cb" containerName="registry-server" Jan 30 08:13:00 crc kubenswrapper[4870]: E0130 
08:13:00.191074 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5255b75-6d10-40f0-9d11-c975458382cb" containerName="extract-utilities" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.191108 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5255b75-6d10-40f0-9d11-c975458382cb" containerName="extract-utilities" Jan 30 08:13:00 crc kubenswrapper[4870]: E0130 08:13:00.191123 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05" containerName="controller-manager" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.191131 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05" containerName="controller-manager" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.191255 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2cfd8-50f7-49c7-a7e8-45e7675e9f05" containerName="controller-manager" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.191270 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5255b75-6d10-40f0-9d11-c975458382cb" containerName="registry-server" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.191749 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.193960 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.195531 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.196079 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.197076 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.197154 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.197191 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.201366 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c6cdccc5f-szscj"] Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.206423 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.234526 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0b32bd5-0420-437c-abe3-b568b5fced25-serving-cert\") pod \"controller-manager-5c6cdccc5f-szscj\" (UID: \"f0b32bd5-0420-437c-abe3-b568b5fced25\") " pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.234598 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f0b32bd5-0420-437c-abe3-b568b5fced25-config\") pod \"controller-manager-5c6cdccc5f-szscj\" (UID: \"f0b32bd5-0420-437c-abe3-b568b5fced25\") " pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.234675 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2llk\" (UniqueName: \"kubernetes.io/projected/f0b32bd5-0420-437c-abe3-b568b5fced25-kube-api-access-n2llk\") pod \"controller-manager-5c6cdccc5f-szscj\" (UID: \"f0b32bd5-0420-437c-abe3-b568b5fced25\") " pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.234741 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0b32bd5-0420-437c-abe3-b568b5fced25-proxy-ca-bundles\") pod \"controller-manager-5c6cdccc5f-szscj\" (UID: \"f0b32bd5-0420-437c-abe3-b568b5fced25\") " pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.234816 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0b32bd5-0420-437c-abe3-b568b5fced25-client-ca\") pod \"controller-manager-5c6cdccc5f-szscj\" (UID: \"f0b32bd5-0420-437c-abe3-b568b5fced25\") " pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.336137 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0b32bd5-0420-437c-abe3-b568b5fced25-serving-cert\") pod \"controller-manager-5c6cdccc5f-szscj\" (UID: \"f0b32bd5-0420-437c-abe3-b568b5fced25\") " pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.336216 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b32bd5-0420-437c-abe3-b568b5fced25-config\") pod \"controller-manager-5c6cdccc5f-szscj\" (UID: \"f0b32bd5-0420-437c-abe3-b568b5fced25\") " pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.336273 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2llk\" (UniqueName: \"kubernetes.io/projected/f0b32bd5-0420-437c-abe3-b568b5fced25-kube-api-access-n2llk\") pod \"controller-manager-5c6cdccc5f-szscj\" (UID: \"f0b32bd5-0420-437c-abe3-b568b5fced25\") " pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.336314 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0b32bd5-0420-437c-abe3-b568b5fced25-proxy-ca-bundles\") pod \"controller-manager-5c6cdccc5f-szscj\" (UID: \"f0b32bd5-0420-437c-abe3-b568b5fced25\") " pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.336373 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0b32bd5-0420-437c-abe3-b568b5fced25-client-ca\") pod \"controller-manager-5c6cdccc5f-szscj\" (UID: 
\"f0b32bd5-0420-437c-abe3-b568b5fced25\") " pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.338041 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0b32bd5-0420-437c-abe3-b568b5fced25-client-ca\") pod \"controller-manager-5c6cdccc5f-szscj\" (UID: \"f0b32bd5-0420-437c-abe3-b568b5fced25\") " pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.339205 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0b32bd5-0420-437c-abe3-b568b5fced25-proxy-ca-bundles\") pod \"controller-manager-5c6cdccc5f-szscj\" (UID: \"f0b32bd5-0420-437c-abe3-b568b5fced25\") " pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.339686 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b32bd5-0420-437c-abe3-b568b5fced25-config\") pod \"controller-manager-5c6cdccc5f-szscj\" (UID: \"f0b32bd5-0420-437c-abe3-b568b5fced25\") " pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.342083 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0b32bd5-0420-437c-abe3-b568b5fced25-serving-cert\") pod \"controller-manager-5c6cdccc5f-szscj\" (UID: \"f0b32bd5-0420-437c-abe3-b568b5fced25\") " pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.358110 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2llk\" (UniqueName: \"kubernetes.io/projected/f0b32bd5-0420-437c-abe3-b568b5fced25-kube-api-access-n2llk\") pod \"controller-manager-5c6cdccc5f-szscj\" (UID: \"f0b32bd5-0420-437c-abe3-b568b5fced25\") " pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.523484 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.686415 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rk4lj" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.686761 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rk4lj" Jan 30 08:13:00 crc kubenswrapper[4870]: I0130 08:13:00.754491 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rk4lj" Jan 30 08:13:01 crc kubenswrapper[4870]: I0130 08:13:01.015095 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c6cdccc5f-szscj"] Jan 30 08:13:01 crc kubenswrapper[4870]: W0130 08:13:01.027393 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0b32bd5_0420_437c_abe3_b568b5fced25.slice/crio-9ddf8fa07886379822b5d7e837b006d9d1c35dacedfc83c71d1a7c67e61b89cf WatchSource:0}: Error finding container 9ddf8fa07886379822b5d7e837b006d9d1c35dacedfc83c71d1a7c67e61b89cf: Status 404 returned error can't find the container with id 9ddf8fa07886379822b5d7e837b006d9d1c35dacedfc83c71d1a7c67e61b89cf Jan 30 08:13:01 crc kubenswrapper[4870]: I0130 08:13:01.054331 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sdlrf" Jan 30 08:13:01 crc kubenswrapper[4870]: I0130 08:13:01.054385 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sdlrf" Jan 30 08:13:01 crc kubenswrapper[4870]: I0130 08:13:01.101663 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sdlrf" Jan 30 08:13:01 crc kubenswrapper[4870]: I0130 08:13:01.143293 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cx2x5" Jan 30 08:13:01 crc kubenswrapper[4870]: I0130 08:13:01.143340 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cx2x5" Jan 30 08:13:01 crc kubenswrapper[4870]: I0130 08:13:01.181851 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" event={"ID":"f0b32bd5-0420-437c-abe3-b568b5fced25","Type":"ContainerStarted","Data":"9ddf8fa07886379822b5d7e837b006d9d1c35dacedfc83c71d1a7c67e61b89cf"} Jan 30 08:13:01 crc kubenswrapper[4870]: I0130 08:13:01.204032 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cx2x5" Jan 30 08:13:01 crc kubenswrapper[4870]: I0130 08:13:01.237986 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sdlrf" Jan 30 08:13:01 crc kubenswrapper[4870]: I0130 08:13:01.241322 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rk4lj" Jan 30 08:13:01 crc kubenswrapper[4870]: I0130 08:13:01.253921 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cx2x5" Jan 30 08:13:01 crc kubenswrapper[4870]: I0130 08:13:01.285976 4870 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-vm685" Jan 30 08:13:01 crc kubenswrapper[4870]: I0130 08:13:01.286037 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vm685" Jan 30 08:13:01 crc kubenswrapper[4870]: I0130 08:13:01.355255 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vm685" Jan 30 08:13:02 crc kubenswrapper[4870]: I0130 08:13:02.189763 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" event={"ID":"f0b32bd5-0420-437c-abe3-b568b5fced25","Type":"ContainerStarted","Data":"e0c909e10ad146c5f25ddc4c306ee746d3ae6daa59a6ed6f36b3bed06976cc14"} Jan 30 08:13:02 crc kubenswrapper[4870]: I0130 08:13:02.213415 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" podStartSLOduration=6.213399202 podStartE2EDuration="6.213399202s" podCreationTimestamp="2026-01-30 08:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:13:02.211371748 +0000 UTC m=+220.906918857" watchObservedRunningTime="2026-01-30 08:13:02.213399202 +0000 UTC m=+220.908946311" Jan 30 08:13:02 crc kubenswrapper[4870]: I0130 08:13:02.248264 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vm685" Jan 30 08:13:03 crc kubenswrapper[4870]: I0130 08:13:03.048570 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ddg46" Jan 30 08:13:03 crc kubenswrapper[4870]: I0130 08:13:03.048629 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ddg46" Jan 30 08:13:03 crc kubenswrapper[4870]: I0130 08:13:03.093580 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ddg46" Jan 30 08:13:03 crc kubenswrapper[4870]: I0130 08:13:03.131446 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sdlrf"] Jan 30 08:13:03 crc kubenswrapper[4870]: I0130 08:13:03.198205 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" Jan 30 08:13:03 crc kubenswrapper[4870]: I0130 08:13:03.199012 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sdlrf" podUID="e02d35f8-2e8c-47a3-87c9-9580ab766290" containerName="registry-server" containerID="cri-o://0ab86db3891506c1e61fe197330cdb4401ed14cf6adc50c8af9e3081bf1ba9b5" gracePeriod=2 Jan 30 08:13:03 crc kubenswrapper[4870]: I0130 08:13:03.208745 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" Jan 30 08:13:03 crc kubenswrapper[4870]: I0130 08:13:03.270809 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ddg46" Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.113385 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vm685"] Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.217849 4870 
generic.go:334] "Generic (PLEG): container finished" podID="e02d35f8-2e8c-47a3-87c9-9580ab766290" containerID="0ab86db3891506c1e61fe197330cdb4401ed14cf6adc50c8af9e3081bf1ba9b5" exitCode=0 Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.219403 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sdlrf" event={"ID":"e02d35f8-2e8c-47a3-87c9-9580ab766290","Type":"ContainerDied","Data":"0ab86db3891506c1e61fe197330cdb4401ed14cf6adc50c8af9e3081bf1ba9b5"} Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.219598 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vm685" podUID="abc41080-75c5-421f-baa8-f05792f74564" containerName="registry-server" containerID="cri-o://fd4c89b8cbd81ba1ad1c068e8582d00bddadd898a4dcfa9e47b3ae08c6dd6344" gracePeriod=2 Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.361676 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sdlrf" Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.402535 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e02d35f8-2e8c-47a3-87c9-9580ab766290-catalog-content\") pod \"e02d35f8-2e8c-47a3-87c9-9580ab766290\" (UID: \"e02d35f8-2e8c-47a3-87c9-9580ab766290\") " Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.402590 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e02d35f8-2e8c-47a3-87c9-9580ab766290-utilities\") pod \"e02d35f8-2e8c-47a3-87c9-9580ab766290\" (UID: \"e02d35f8-2e8c-47a3-87c9-9580ab766290\") " Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.402614 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wlfz\" (UniqueName: \"kubernetes.io/projected/e02d35f8-2e8c-47a3-87c9-9580ab766290-kube-api-access-6wlfz\") pod \"e02d35f8-2e8c-47a3-87c9-9580ab766290\" (UID: \"e02d35f8-2e8c-47a3-87c9-9580ab766290\") " Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.404303 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e02d35f8-2e8c-47a3-87c9-9580ab766290-utilities" (OuterVolumeSpecName: "utilities") pod "e02d35f8-2e8c-47a3-87c9-9580ab766290" (UID: "e02d35f8-2e8c-47a3-87c9-9580ab766290"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.426674 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e02d35f8-2e8c-47a3-87c9-9580ab766290-kube-api-access-6wlfz" (OuterVolumeSpecName: "kube-api-access-6wlfz") pod "e02d35f8-2e8c-47a3-87c9-9580ab766290" (UID: "e02d35f8-2e8c-47a3-87c9-9580ab766290"). InnerVolumeSpecName "kube-api-access-6wlfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.465397 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e02d35f8-2e8c-47a3-87c9-9580ab766290-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e02d35f8-2e8c-47a3-87c9-9580ab766290" (UID: "e02d35f8-2e8c-47a3-87c9-9580ab766290"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.506975 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e02d35f8-2e8c-47a3-87c9-9580ab766290-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.507041 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wlfz\" (UniqueName: \"kubernetes.io/projected/e02d35f8-2e8c-47a3-87c9-9580ab766290-kube-api-access-6wlfz\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.507145 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e02d35f8-2e8c-47a3-87c9-9580ab766290-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.673312 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vm685" Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.709301 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc41080-75c5-421f-baa8-f05792f74564-catalog-content\") pod \"abc41080-75c5-421f-baa8-f05792f74564\" (UID: \"abc41080-75c5-421f-baa8-f05792f74564\") " Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.709400 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhq7x\" (UniqueName: \"kubernetes.io/projected/abc41080-75c5-421f-baa8-f05792f74564-kube-api-access-lhq7x\") pod \"abc41080-75c5-421f-baa8-f05792f74564\" (UID: \"abc41080-75c5-421f-baa8-f05792f74564\") " Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.709502 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc41080-75c5-421f-baa8-f05792f74564-utilities\") pod \"abc41080-75c5-421f-baa8-f05792f74564\" (UID: \"abc41080-75c5-421f-baa8-f05792f74564\") " Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.710415 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abc41080-75c5-421f-baa8-f05792f74564-utilities" (OuterVolumeSpecName: "utilities") pod "abc41080-75c5-421f-baa8-f05792f74564" (UID: "abc41080-75c5-421f-baa8-f05792f74564"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.712887 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abc41080-75c5-421f-baa8-f05792f74564-kube-api-access-lhq7x" (OuterVolumeSpecName: "kube-api-access-lhq7x") pod "abc41080-75c5-421f-baa8-f05792f74564" (UID: "abc41080-75c5-421f-baa8-f05792f74564"). InnerVolumeSpecName "kube-api-access-lhq7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.767124 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abc41080-75c5-421f-baa8-f05792f74564-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abc41080-75c5-421f-baa8-f05792f74564" (UID: "abc41080-75c5-421f-baa8-f05792f74564"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.812248 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc41080-75c5-421f-baa8-f05792f74564-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.812313 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc41080-75c5-421f-baa8-f05792f74564-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:04 crc kubenswrapper[4870]: I0130 08:13:04.812341 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhq7x\" (UniqueName: \"kubernetes.io/projected/abc41080-75c5-421f-baa8-f05792f74564-kube-api-access-lhq7x\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.225558 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sdlrf" event={"ID":"e02d35f8-2e8c-47a3-87c9-9580ab766290","Type":"ContainerDied","Data":"17d6a9bdca6c16fe2977a640455c60bcc06dd2ad4ecdc2b9c6411506d215c0be"} Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.226032 4870 scope.go:117] "RemoveContainer" containerID="0ab86db3891506c1e61fe197330cdb4401ed14cf6adc50c8af9e3081bf1ba9b5" Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.225634 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sdlrf" Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.229947 4870 generic.go:334] "Generic (PLEG): container finished" podID="abc41080-75c5-421f-baa8-f05792f74564" containerID="fd4c89b8cbd81ba1ad1c068e8582d00bddadd898a4dcfa9e47b3ae08c6dd6344" exitCode=0 Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.230042 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vm685" Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.230039 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vm685" event={"ID":"abc41080-75c5-421f-baa8-f05792f74564","Type":"ContainerDied","Data":"fd4c89b8cbd81ba1ad1c068e8582d00bddadd898a4dcfa9e47b3ae08c6dd6344"} Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.230096 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vm685" event={"ID":"abc41080-75c5-421f-baa8-f05792f74564","Type":"ContainerDied","Data":"96518355e0bd9b243a322652ed93adea62f75e712bc08772e1f193f3dde1d1a9"} Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.254268 4870 scope.go:117] "RemoveContainer" containerID="05623975ae7b461659818593078417aaae1d2eaf3b57481a7bebadeb7853e38b" Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.284349 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sdlrf"] Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.289744 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sdlrf"] Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.299450 4870 scope.go:117] "RemoveContainer" containerID="1ecf5db22e2b1fa8547549ce582ecddde377bbe670b1f97e03e9e9e6f42d4dae" Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.325955 4870 scope.go:117] "RemoveContainer" containerID="fd4c89b8cbd81ba1ad1c068e8582d00bddadd898a4dcfa9e47b3ae08c6dd6344" Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.330224 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vm685"] Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.331852 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vm685"] Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.353546 4870 scope.go:117] "RemoveContainer" containerID="a50ef1faa7c24f23fbc57c47c06d93445c8919db7d381cb38dc62d4428844028" Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.368374 4870 scope.go:117] "RemoveContainer" containerID="c5a44e3995c68d711cf7ffd635d1bf773b8cff7c52ede4919575b0156f885354" Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.385620 4870 scope.go:117] "RemoveContainer" containerID="fd4c89b8cbd81ba1ad1c068e8582d00bddadd898a4dcfa9e47b3ae08c6dd6344" Jan 30 08:13:05 crc kubenswrapper[4870]: E0130 08:13:05.386271 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd4c89b8cbd81ba1ad1c068e8582d00bddadd898a4dcfa9e47b3ae08c6dd6344\": container with ID starting with fd4c89b8cbd81ba1ad1c068e8582d00bddadd898a4dcfa9e47b3ae08c6dd6344 not found: ID does not exist" containerID="fd4c89b8cbd81ba1ad1c068e8582d00bddadd898a4dcfa9e47b3ae08c6dd6344" Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.386303 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd4c89b8cbd81ba1ad1c068e8582d00bddadd898a4dcfa9e47b3ae08c6dd6344"} err="failed to get container status \"fd4c89b8cbd81ba1ad1c068e8582d00bddadd898a4dcfa9e47b3ae08c6dd6344\": rpc error: code = NotFound desc = could not find container \"fd4c89b8cbd81ba1ad1c068e8582d00bddadd898a4dcfa9e47b3ae08c6dd6344\": container with ID starting with fd4c89b8cbd81ba1ad1c068e8582d00bddadd898a4dcfa9e47b3ae08c6dd6344 not found: ID does not exist" Jan 30 
08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.386324 4870 scope.go:117] "RemoveContainer" containerID="a50ef1faa7c24f23fbc57c47c06d93445c8919db7d381cb38dc62d4428844028" Jan 30 08:13:05 crc kubenswrapper[4870]: E0130 08:13:05.386886 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a50ef1faa7c24f23fbc57c47c06d93445c8919db7d381cb38dc62d4428844028\": container with ID starting with a50ef1faa7c24f23fbc57c47c06d93445c8919db7d381cb38dc62d4428844028 not found: ID does not exist" containerID="a50ef1faa7c24f23fbc57c47c06d93445c8919db7d381cb38dc62d4428844028" Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.386927 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a50ef1faa7c24f23fbc57c47c06d93445c8919db7d381cb38dc62d4428844028"} err="failed to get container status \"a50ef1faa7c24f23fbc57c47c06d93445c8919db7d381cb38dc62d4428844028\": rpc error: code = NotFound desc = could not find container \"a50ef1faa7c24f23fbc57c47c06d93445c8919db7d381cb38dc62d4428844028\": container with ID starting with a50ef1faa7c24f23fbc57c47c06d93445c8919db7d381cb38dc62d4428844028 not found: ID does not exist" Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.386958 4870 scope.go:117] "RemoveContainer" containerID="c5a44e3995c68d711cf7ffd635d1bf773b8cff7c52ede4919575b0156f885354" Jan 30 08:13:05 crc kubenswrapper[4870]: E0130 08:13:05.387499 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a44e3995c68d711cf7ffd635d1bf773b8cff7c52ede4919575b0156f885354\": container with ID starting with c5a44e3995c68d711cf7ffd635d1bf773b8cff7c52ede4919575b0156f885354 not found: ID does not exist" containerID="c5a44e3995c68d711cf7ffd635d1bf773b8cff7c52ede4919575b0156f885354" Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.387528 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a44e3995c68d711cf7ffd635d1bf773b8cff7c52ede4919575b0156f885354"} err="failed to get container status \"c5a44e3995c68d711cf7ffd635d1bf773b8cff7c52ede4919575b0156f885354\": rpc error: code = NotFound desc = could not find container \"c5a44e3995c68d711cf7ffd635d1bf773b8cff7c52ede4919575b0156f885354\": container with ID starting with c5a44e3995c68d711cf7ffd635d1bf773b8cff7c52ede4919575b0156f885354 not found: ID does not exist" Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.513945 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ddg46"] Jan 30 08:13:05 crc kubenswrapper[4870]: I0130 08:13:05.514195 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ddg46" podUID="025ee8c8-8a97-4158-88fb-c4fa23f5c9c2" containerName="registry-server" containerID="cri-o://7b76f09aa9f9994ca61e250ea9fcc87f7cc8e5437b6738688da0a44244a55f70" gracePeriod=2 Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.005114 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddg46" Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.085796 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abc41080-75c5-421f-baa8-f05792f74564" path="/var/lib/kubelet/pods/abc41080-75c5-421f-baa8-f05792f74564/volumes" Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.087316 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e02d35f8-2e8c-47a3-87c9-9580ab766290" path="/var/lib/kubelet/pods/e02d35f8-2e8c-47a3-87c9-9580ab766290/volumes" Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.137295 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sz97\" (UniqueName: \"kubernetes.io/projected/025ee8c8-8a97-4158-88fb-c4fa23f5c9c2-kube-api-access-4sz97\") pod \"025ee8c8-8a97-4158-88fb-c4fa23f5c9c2\" (UID: \"025ee8c8-8a97-4158-88fb-c4fa23f5c9c2\") " Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.137841 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/025ee8c8-8a97-4158-88fb-c4fa23f5c9c2-catalog-content\") pod \"025ee8c8-8a97-4158-88fb-c4fa23f5c9c2\" (UID: \"025ee8c8-8a97-4158-88fb-c4fa23f5c9c2\") " Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.137974 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025ee8c8-8a97-4158-88fb-c4fa23f5c9c2-utilities\") pod \"025ee8c8-8a97-4158-88fb-c4fa23f5c9c2\" (UID: \"025ee8c8-8a97-4158-88fb-c4fa23f5c9c2\") " Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.138948 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/025ee8c8-8a97-4158-88fb-c4fa23f5c9c2-utilities" (OuterVolumeSpecName: "utilities") pod "025ee8c8-8a97-4158-88fb-c4fa23f5c9c2" (UID: "025ee8c8-8a97-4158-88fb-c4fa23f5c9c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.144604 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/025ee8c8-8a97-4158-88fb-c4fa23f5c9c2-kube-api-access-4sz97" (OuterVolumeSpecName: "kube-api-access-4sz97") pod "025ee8c8-8a97-4158-88fb-c4fa23f5c9c2" (UID: "025ee8c8-8a97-4158-88fb-c4fa23f5c9c2"). InnerVolumeSpecName "kube-api-access-4sz97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.168545 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/025ee8c8-8a97-4158-88fb-c4fa23f5c9c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "025ee8c8-8a97-4158-88fb-c4fa23f5c9c2" (UID: "025ee8c8-8a97-4158-88fb-c4fa23f5c9c2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.240479 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025ee8c8-8a97-4158-88fb-c4fa23f5c9c2-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.240549 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sz97\" (UniqueName: \"kubernetes.io/projected/025ee8c8-8a97-4158-88fb-c4fa23f5c9c2-kube-api-access-4sz97\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.240574 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/025ee8c8-8a97-4158-88fb-c4fa23f5c9c2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.243568 4870 generic.go:334] "Generic (PLEG): container finished" podID="025ee8c8-8a97-4158-88fb-c4fa23f5c9c2" containerID="7b76f09aa9f9994ca61e250ea9fcc87f7cc8e5437b6738688da0a44244a55f70" exitCode=0 Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.243642 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddg46" event={"ID":"025ee8c8-8a97-4158-88fb-c4fa23f5c9c2","Type":"ContainerDied","Data":"7b76f09aa9f9994ca61e250ea9fcc87f7cc8e5437b6738688da0a44244a55f70"} Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.243680 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddg46" event={"ID":"025ee8c8-8a97-4158-88fb-c4fa23f5c9c2","Type":"ContainerDied","Data":"a0bdc36a8576d5c25a0097622d42f72393c74577381da880313d27ca87e33cc7"} Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.243700 4870 scope.go:117] "RemoveContainer" containerID="7b76f09aa9f9994ca61e250ea9fcc87f7cc8e5437b6738688da0a44244a55f70" Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.243955 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddg46" Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.277583 4870 scope.go:117] "RemoveContainer" containerID="4de26b3246c8808449e770399c4f7fa4183c010b1afab00b9d0e527740a08bc4" Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.285855 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ddg46"] Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.293865 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ddg46"] Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.307838 4870 scope.go:117] "RemoveContainer" containerID="3038fe3301b881e13deef165b3387577b2b42b2fdbf788c35bdc741aa4ed718c" Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.329439 4870 scope.go:117] "RemoveContainer" containerID="7b76f09aa9f9994ca61e250ea9fcc87f7cc8e5437b6738688da0a44244a55f70" Jan 30 08:13:06 crc kubenswrapper[4870]: E0130 08:13:06.329945 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b76f09aa9f9994ca61e250ea9fcc87f7cc8e5437b6738688da0a44244a55f70\": container with ID starting with 7b76f09aa9f9994ca61e250ea9fcc87f7cc8e5437b6738688da0a44244a55f70 not found: ID does not exist" containerID="7b76f09aa9f9994ca61e250ea9fcc87f7cc8e5437b6738688da0a44244a55f70" Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.329989 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b76f09aa9f9994ca61e250ea9fcc87f7cc8e5437b6738688da0a44244a55f70"} err="failed to get container status \"7b76f09aa9f9994ca61e250ea9fcc87f7cc8e5437b6738688da0a44244a55f70\": rpc error: code = NotFound desc = could not find container \"7b76f09aa9f9994ca61e250ea9fcc87f7cc8e5437b6738688da0a44244a55f70\": container with ID starting with 7b76f09aa9f9994ca61e250ea9fcc87f7cc8e5437b6738688da0a44244a55f70 not found: ID does not exist" Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.330019 4870 scope.go:117] "RemoveContainer" containerID="4de26b3246c8808449e770399c4f7fa4183c010b1afab00b9d0e527740a08bc4" Jan 30 08:13:06 crc kubenswrapper[4870]: E0130 08:13:06.330548 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4de26b3246c8808449e770399c4f7fa4183c010b1afab00b9d0e527740a08bc4\": container with ID starting with 4de26b3246c8808449e770399c4f7fa4183c010b1afab00b9d0e527740a08bc4 not found: ID does not exist" containerID="4de26b3246c8808449e770399c4f7fa4183c010b1afab00b9d0e527740a08bc4" Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.330622 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4de26b3246c8808449e770399c4f7fa4183c010b1afab00b9d0e527740a08bc4"} err="failed to get container status \"4de26b3246c8808449e770399c4f7fa4183c010b1afab00b9d0e527740a08bc4\": rpc error: code = NotFound desc = could not find container \"4de26b3246c8808449e770399c4f7fa4183c010b1afab00b9d0e527740a08bc4\": container with ID starting with 4de26b3246c8808449e770399c4f7fa4183c010b1afab00b9d0e527740a08bc4 not found: ID does not exist" Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.330673 4870 scope.go:117] "RemoveContainer" containerID="3038fe3301b881e13deef165b3387577b2b42b2fdbf788c35bdc741aa4ed718c" Jan 30 08:13:06 crc kubenswrapper[4870]: E0130 08:13:06.331055 4870 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3038fe3301b881e13deef165b3387577b2b42b2fdbf788c35bdc741aa4ed718c\": container with ID starting with 3038fe3301b881e13deef165b3387577b2b42b2fdbf788c35bdc741aa4ed718c not found: ID does not exist" containerID="3038fe3301b881e13deef165b3387577b2b42b2fdbf788c35bdc741aa4ed718c" Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.331087 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3038fe3301b881e13deef165b3387577b2b42b2fdbf788c35bdc741aa4ed718c"} err="failed to get container status \"3038fe3301b881e13deef165b3387577b2b42b2fdbf788c35bdc741aa4ed718c\": rpc error: code = NotFound desc = could not find container \"3038fe3301b881e13deef165b3387577b2b42b2fdbf788c35bdc741aa4ed718c\": container with ID starting with 3038fe3301b881e13deef165b3387577b2b42b2fdbf788c35bdc741aa4ed718c not found: ID does not exist" Jan 30 08:13:06 crc kubenswrapper[4870]: I0130 08:13:06.652705 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" podUID="d4876c72-6cd1-43e0-b44a-45c4bd69e91f" containerName="oauth-openshift" containerID="cri-o://102b04420a2d27e2c42980d96dcb76309f8b40b50075cb8c01ca29cf54abe789" gracePeriod=15 Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.197687 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.256828 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-service-ca\") pod \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.256900 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-session\") pod \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.256926 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-template-login\") pod \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.256965 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-template-error\") pod \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.256988 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-idp-0-file-data\") pod \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.257020 4870 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-audit-dir\") pod \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.257267 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d4876c72-6cd1-43e0-b44a-45c4bd69e91f" (UID: "d4876c72-6cd1-43e0-b44a-45c4bd69e91f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.258221 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-template-provider-selection\") pod \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.258274 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-router-certs\") pod \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.258323 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dp8v\" (UniqueName: \"kubernetes.io/projected/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-kube-api-access-4dp8v\") pod \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.258362 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-ocp-branding-template\") pod \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.258393 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-cliconfig\") pod \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.258622 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-trusted-ca-bundle\") pod \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.258646 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-serving-cert\") pod \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.258641 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "d4876c72-6cd1-43e0-b44a-45c4bd69e91f" (UID: "d4876c72-6cd1-43e0-b44a-45c4bd69e91f"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.258677 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-audit-policies\") pod \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\" (UID: \"d4876c72-6cd1-43e0-b44a-45c4bd69e91f\") " Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.258905 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.258919 4870 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.259365 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d4876c72-6cd1-43e0-b44a-45c4bd69e91f" (UID: "d4876c72-6cd1-43e0-b44a-45c4bd69e91f"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.259834 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d4876c72-6cd1-43e0-b44a-45c4bd69e91f" (UID: "d4876c72-6cd1-43e0-b44a-45c4bd69e91f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.260465 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d4876c72-6cd1-43e0-b44a-45c4bd69e91f" (UID: "d4876c72-6cd1-43e0-b44a-45c4bd69e91f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.262696 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d4876c72-6cd1-43e0-b44a-45c4bd69e91f" (UID: "d4876c72-6cd1-43e0-b44a-45c4bd69e91f"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.263105 4870 generic.go:334] "Generic (PLEG): container finished" podID="d4876c72-6cd1-43e0-b44a-45c4bd69e91f" containerID="102b04420a2d27e2c42980d96dcb76309f8b40b50075cb8c01ca29cf54abe789" exitCode=0 Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.263241 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" event={"ID":"d4876c72-6cd1-43e0-b44a-45c4bd69e91f","Type":"ContainerDied","Data":"102b04420a2d27e2c42980d96dcb76309f8b40b50075cb8c01ca29cf54abe789"} Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.263282 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" event={"ID":"d4876c72-6cd1-43e0-b44a-45c4bd69e91f","Type":"ContainerDied","Data":"94d52f9687de877d5fd97b94963947e16acbe6d1f11849a8cb9317ae4e717ce7"} Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.263303 4870 scope.go:117] "RemoveContainer" containerID="102b04420a2d27e2c42980d96dcb76309f8b40b50075cb8c01ca29cf54abe789" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.263425 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xxrkx" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.265175 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d4876c72-6cd1-43e0-b44a-45c4bd69e91f" (UID: "d4876c72-6cd1-43e0-b44a-45c4bd69e91f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.267157 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d4876c72-6cd1-43e0-b44a-45c4bd69e91f" (UID: "d4876c72-6cd1-43e0-b44a-45c4bd69e91f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.270068 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d4876c72-6cd1-43e0-b44a-45c4bd69e91f" (UID: "d4876c72-6cd1-43e0-b44a-45c4bd69e91f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.270343 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d4876c72-6cd1-43e0-b44a-45c4bd69e91f" (UID: "d4876c72-6cd1-43e0-b44a-45c4bd69e91f"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.271342 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d4876c72-6cd1-43e0-b44a-45c4bd69e91f" (UID: "d4876c72-6cd1-43e0-b44a-45c4bd69e91f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.272041 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d4876c72-6cd1-43e0-b44a-45c4bd69e91f" (UID: "d4876c72-6cd1-43e0-b44a-45c4bd69e91f"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.272449 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-kube-api-access-4dp8v" (OuterVolumeSpecName: "kube-api-access-4dp8v") pod "d4876c72-6cd1-43e0-b44a-45c4bd69e91f" (UID: "d4876c72-6cd1-43e0-b44a-45c4bd69e91f"). InnerVolumeSpecName "kube-api-access-4dp8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.274469 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "d4876c72-6cd1-43e0-b44a-45c4bd69e91f" (UID: "d4876c72-6cd1-43e0-b44a-45c4bd69e91f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.302159 4870 scope.go:117] "RemoveContainer" containerID="102b04420a2d27e2c42980d96dcb76309f8b40b50075cb8c01ca29cf54abe789" Jan 30 08:13:07 crc kubenswrapper[4870]: E0130 08:13:07.302745 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"102b04420a2d27e2c42980d96dcb76309f8b40b50075cb8c01ca29cf54abe789\": container with ID starting with 102b04420a2d27e2c42980d96dcb76309f8b40b50075cb8c01ca29cf54abe789 not found: ID does not exist" containerID="102b04420a2d27e2c42980d96dcb76309f8b40b50075cb8c01ca29cf54abe789" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.302788 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"102b04420a2d27e2c42980d96dcb76309f8b40b50075cb8c01ca29cf54abe789"} err="failed to get container status \"102b04420a2d27e2c42980d96dcb76309f8b40b50075cb8c01ca29cf54abe789\": rpc error: code = NotFound desc = could not find container \"102b04420a2d27e2c42980d96dcb76309f8b40b50075cb8c01ca29cf54abe789\": container with ID starting with 102b04420a2d27e2c42980d96dcb76309f8b40b50075cb8c01ca29cf54abe789 not found: ID does not exist" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.360508 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.360541 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.360558 4870 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.360571 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.360582 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.360590 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.360600 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.360611 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.360623 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.360635 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dp8v\" (UniqueName: \"kubernetes.io/projected/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-kube-api-access-4dp8v\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.360650 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.360663 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d4876c72-6cd1-43e0-b44a-45c4bd69e91f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.605276 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xxrkx"] Jan 30 08:13:07 crc kubenswrapper[4870]: I0130 08:13:07.609029 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xxrkx"] Jan 30 08:13:08 crc kubenswrapper[4870]: I0130 08:13:08.082853 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="025ee8c8-8a97-4158-88fb-c4fa23f5c9c2" path="/var/lib/kubelet/pods/025ee8c8-8a97-4158-88fb-c4fa23f5c9c2/volumes" Jan 30 08:13:08 crc kubenswrapper[4870]: I0130 08:13:08.084262 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4876c72-6cd1-43e0-b44a-45c4bd69e91f" path="/var/lib/kubelet/pods/d4876c72-6cd1-43e0-b44a-45c4bd69e91f/volumes" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.206579 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-dc8679f5f-mdxn5"] Jan 30 08:13:10 crc kubenswrapper[4870]: E0130 08:13:10.206924 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025ee8c8-8a97-4158-88fb-c4fa23f5c9c2" containerName="extract-utilities" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.206944 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="025ee8c8-8a97-4158-88fb-c4fa23f5c9c2" containerName="extract-utilities" Jan 30 08:13:10 crc kubenswrapper[4870]: E0130 08:13:10.206961 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abc41080-75c5-421f-baa8-f05792f74564" containerName="registry-server" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.206971 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc41080-75c5-421f-baa8-f05792f74564" containerName="registry-server" Jan 30 08:13:10 crc kubenswrapper[4870]: E0130 08:13:10.206987 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e02d35f8-2e8c-47a3-87c9-9580ab766290" containerName="extract-content" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.206998 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e02d35f8-2e8c-47a3-87c9-9580ab766290" 
containerName="extract-content" Jan 30 08:13:10 crc kubenswrapper[4870]: E0130 08:13:10.207012 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4876c72-6cd1-43e0-b44a-45c4bd69e91f" containerName="oauth-openshift" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.207023 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4876c72-6cd1-43e0-b44a-45c4bd69e91f" containerName="oauth-openshift" Jan 30 08:13:10 crc kubenswrapper[4870]: E0130 08:13:10.207043 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025ee8c8-8a97-4158-88fb-c4fa23f5c9c2" containerName="registry-server" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.207053 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="025ee8c8-8a97-4158-88fb-c4fa23f5c9c2" containerName="registry-server" Jan 30 08:13:10 crc kubenswrapper[4870]: E0130 08:13:10.207071 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abc41080-75c5-421f-baa8-f05792f74564" containerName="extract-utilities" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.207080 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc41080-75c5-421f-baa8-f05792f74564" containerName="extract-utilities" Jan 30 08:13:10 crc kubenswrapper[4870]: E0130 08:13:10.207089 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e02d35f8-2e8c-47a3-87c9-9580ab766290" containerName="registry-server" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.207097 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e02d35f8-2e8c-47a3-87c9-9580ab766290" containerName="registry-server" Jan 30 08:13:10 crc kubenswrapper[4870]: E0130 08:13:10.207115 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025ee8c8-8a97-4158-88fb-c4fa23f5c9c2" containerName="extract-content" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.207123 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="025ee8c8-8a97-4158-88fb-c4fa23f5c9c2" containerName="extract-content" Jan 30 08:13:10 crc kubenswrapper[4870]: E0130 08:13:10.207154 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e02d35f8-2e8c-47a3-87c9-9580ab766290" containerName="extract-utilities" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.207162 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e02d35f8-2e8c-47a3-87c9-9580ab766290" containerName="extract-utilities" Jan 30 08:13:10 crc kubenswrapper[4870]: E0130 08:13:10.207172 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abc41080-75c5-421f-baa8-f05792f74564" containerName="extract-content" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.207181 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc41080-75c5-421f-baa8-f05792f74564" containerName="extract-content" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.207316 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4876c72-6cd1-43e0-b44a-45c4bd69e91f" containerName="oauth-openshift" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.207331 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="025ee8c8-8a97-4158-88fb-c4fa23f5c9c2" containerName="registry-server" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.207341 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e02d35f8-2e8c-47a3-87c9-9580ab766290" containerName="registry-server" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.207353 4870 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="abc41080-75c5-421f-baa8-f05792f74564" containerName="registry-server" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.207913 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.214639 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.214701 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.215091 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.217459 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.217643 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.218544 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.218934 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.219107 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.219198 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.220089 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.220610 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.222124 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.230254 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.234561 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-dc8679f5f-mdxn5"] Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.236916 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.241603 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.304472 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-system-session\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.304560 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.304601 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.304646 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.304679 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.304717 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.304749 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-system-service-ca\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.304963 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-system-router-certs\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc 
kubenswrapper[4870]: I0130 08:13:10.305067 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnz92\" (UniqueName: \"kubernetes.io/projected/5663080a-bd5b-4cfd-84be-13421571ce8a-kube-api-access-xnz92\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.305110 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5663080a-bd5b-4cfd-84be-13421571ce8a-audit-dir\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.305190 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.305231 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-user-template-error\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.305475 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5663080a-bd5b-4cfd-84be-13421571ce8a-audit-policies\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.305540 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-user-template-login\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.407653 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.407765 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " 
pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.407822 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.407867 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-system-service-ca\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.407956 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-system-router-certs\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.408049 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnz92\" (UniqueName: \"kubernetes.io/projected/5663080a-bd5b-4cfd-84be-13421571ce8a-kube-api-access-xnz92\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.408087 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5663080a-bd5b-4cfd-84be-13421571ce8a-audit-dir\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.408125 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.408160 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-user-template-error\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.408224 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5663080a-bd5b-4cfd-84be-13421571ce8a-audit-policies\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 
08:13:10.408261 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-user-template-login\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.408299 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-system-session\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.408351 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.408391 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.410243 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.411194 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5663080a-bd5b-4cfd-84be-13421571ce8a-audit-policies\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.410312 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-system-service-ca\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.412011 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.411135 4870 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5663080a-bd5b-4cfd-84be-13421571ce8a-audit-dir\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.414665 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.414689 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-user-template-error\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.414702 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.416521 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.416751 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-system-session\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.417518 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.417926 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-user-template-login\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.418214 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/5663080a-bd5b-4cfd-84be-13421571ce8a-v4-0-config-system-router-certs\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.435818 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnz92\" (UniqueName: \"kubernetes.io/projected/5663080a-bd5b-4cfd-84be-13421571ce8a-kube-api-access-xnz92\") pod \"oauth-openshift-dc8679f5f-mdxn5\" (UID: \"5663080a-bd5b-4cfd-84be-13421571ce8a\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:10 crc kubenswrapper[4870]: I0130 08:13:10.553491 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:11 crc kubenswrapper[4870]: I0130 08:13:11.009691 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-dc8679f5f-mdxn5"] Jan 30 08:13:11 crc kubenswrapper[4870]: W0130 08:13:11.025774 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5663080a_bd5b_4cfd_84be_13421571ce8a.slice/crio-2b6e8c53aef34696d37ddc955b0f109b7ba6d75d043ea54fa91476cc734a3687 WatchSource:0}: Error finding container 2b6e8c53aef34696d37ddc955b0f109b7ba6d75d043ea54fa91476cc734a3687: Status 404 returned error can't find the container with id 2b6e8c53aef34696d37ddc955b0f109b7ba6d75d043ea54fa91476cc734a3687 Jan 30 08:13:11 crc kubenswrapper[4870]: I0130 08:13:11.299406 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" event={"ID":"5663080a-bd5b-4cfd-84be-13421571ce8a","Type":"ContainerStarted","Data":"2b6e8c53aef34696d37ddc955b0f109b7ba6d75d043ea54fa91476cc734a3687"} Jan 30 08:13:12 crc kubenswrapper[4870]: I0130 08:13:12.307356 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" event={"ID":"5663080a-bd5b-4cfd-84be-13421571ce8a","Type":"ContainerStarted","Data":"0cf2ca9535702238062d0f3eec5b2232879212588fd08767b46dd1c00eefa89b"} Jan 30 08:13:12 crc kubenswrapper[4870]: I0130 08:13:12.307814 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:12 crc kubenswrapper[4870]: I0130 08:13:12.313357 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" Jan 30 08:13:12 crc kubenswrapper[4870]: I0130 08:13:12.335209 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-dc8679f5f-mdxn5" podStartSLOduration=31.33519352 podStartE2EDuration="31.33519352s" podCreationTimestamp="2026-01-30 08:12:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:13:12.331870076 +0000 UTC m=+231.027417195" watchObservedRunningTime="2026-01-30 08:13:12.33519352 +0000 UTC m=+231.030740629" Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.119667 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c6cdccc5f-szscj"] Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.120929 4870 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" podUID="f0b32bd5-0420-437c-abe3-b568b5fced25" containerName="controller-manager" containerID="cri-o://e0c909e10ad146c5f25ddc4c306ee746d3ae6daa59a6ed6f36b3bed06976cc14" gracePeriod=30 Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.216896 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt"] Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.217218 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" podUID="2401867d-7869-4633-aeeb-bfb3653c2786" containerName="route-controller-manager" containerID="cri-o://761594cbe8a68e508978ea78d2572aa1dd60ea6b5eadf680df6e2838d7862ec9" gracePeriod=30 Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.334431 4870 generic.go:334] "Generic (PLEG): container finished" podID="f0b32bd5-0420-437c-abe3-b568b5fced25" containerID="e0c909e10ad146c5f25ddc4c306ee746d3ae6daa59a6ed6f36b3bed06976cc14" exitCode=0 Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.334496 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" event={"ID":"f0b32bd5-0420-437c-abe3-b568b5fced25","Type":"ContainerDied","Data":"e0c909e10ad146c5f25ddc4c306ee746d3ae6daa59a6ed6f36b3bed06976cc14"} Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.701418 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.789372 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.821830 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2401867d-7869-4633-aeeb-bfb3653c2786-client-ca" (OuterVolumeSpecName: "client-ca") pod "2401867d-7869-4633-aeeb-bfb3653c2786" (UID: "2401867d-7869-4633-aeeb-bfb3653c2786"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.820588 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2401867d-7869-4633-aeeb-bfb3653c2786-client-ca\") pod \"2401867d-7869-4633-aeeb-bfb3653c2786\" (UID: \"2401867d-7869-4633-aeeb-bfb3653c2786\") " Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.822086 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2401867d-7869-4633-aeeb-bfb3653c2786-serving-cert\") pod \"2401867d-7869-4633-aeeb-bfb3653c2786\" (UID: \"2401867d-7869-4633-aeeb-bfb3653c2786\") " Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.823003 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrvtn\" (UniqueName: \"kubernetes.io/projected/2401867d-7869-4633-aeeb-bfb3653c2786-kube-api-access-lrvtn\") pod \"2401867d-7869-4633-aeeb-bfb3653c2786\" (UID: \"2401867d-7869-4633-aeeb-bfb3653c2786\") " Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.823431 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2401867d-7869-4633-aeeb-bfb3653c2786-config\") pod \"2401867d-7869-4633-aeeb-bfb3653c2786\" (UID: \"2401867d-7869-4633-aeeb-bfb3653c2786\") " Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.823835 4870 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2401867d-7869-4633-aeeb-bfb3653c2786-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.826584 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2401867d-7869-4633-aeeb-bfb3653c2786-config" (OuterVolumeSpecName: "config") pod "2401867d-7869-4633-aeeb-bfb3653c2786" (UID: "2401867d-7869-4633-aeeb-bfb3653c2786"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.830732 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2401867d-7869-4633-aeeb-bfb3653c2786-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2401867d-7869-4633-aeeb-bfb3653c2786" (UID: "2401867d-7869-4633-aeeb-bfb3653c2786"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.831680 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2401867d-7869-4633-aeeb-bfb3653c2786-kube-api-access-lrvtn" (OuterVolumeSpecName: "kube-api-access-lrvtn") pod "2401867d-7869-4633-aeeb-bfb3653c2786" (UID: "2401867d-7869-4633-aeeb-bfb3653c2786"). InnerVolumeSpecName "kube-api-access-lrvtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.925521 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0b32bd5-0420-437c-abe3-b568b5fced25-client-ca\") pod \"f0b32bd5-0420-437c-abe3-b568b5fced25\" (UID: \"f0b32bd5-0420-437c-abe3-b568b5fced25\") " Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.925617 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0b32bd5-0420-437c-abe3-b568b5fced25-proxy-ca-bundles\") pod \"f0b32bd5-0420-437c-abe3-b568b5fced25\" (UID: \"f0b32bd5-0420-437c-abe3-b568b5fced25\") " Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.925690 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2llk\" (UniqueName: \"kubernetes.io/projected/f0b32bd5-0420-437c-abe3-b568b5fced25-kube-api-access-n2llk\") pod \"f0b32bd5-0420-437c-abe3-b568b5fced25\" (UID: \"f0b32bd5-0420-437c-abe3-b568b5fced25\") " Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.925711 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b32bd5-0420-437c-abe3-b568b5fced25-config\") pod \"f0b32bd5-0420-437c-abe3-b568b5fced25\" (UID: \"f0b32bd5-0420-437c-abe3-b568b5fced25\") " Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.925751 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0b32bd5-0420-437c-abe3-b568b5fced25-serving-cert\") pod \"f0b32bd5-0420-437c-abe3-b568b5fced25\" (UID: \"f0b32bd5-0420-437c-abe3-b568b5fced25\") " Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.926079 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2401867d-7869-4633-aeeb-bfb3653c2786-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.926094 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrvtn\" (UniqueName: \"kubernetes.io/projected/2401867d-7869-4633-aeeb-bfb3653c2786-kube-api-access-lrvtn\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.926104 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2401867d-7869-4633-aeeb-bfb3653c2786-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.926774 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0b32bd5-0420-437c-abe3-b568b5fced25-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f0b32bd5-0420-437c-abe3-b568b5fced25" (UID: "f0b32bd5-0420-437c-abe3-b568b5fced25"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.926957 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0b32bd5-0420-437c-abe3-b568b5fced25-config" (OuterVolumeSpecName: "config") pod "f0b32bd5-0420-437c-abe3-b568b5fced25" (UID: "f0b32bd5-0420-437c-abe3-b568b5fced25"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.927354 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0b32bd5-0420-437c-abe3-b568b5fced25-client-ca" (OuterVolumeSpecName: "client-ca") pod "f0b32bd5-0420-437c-abe3-b568b5fced25" (UID: "f0b32bd5-0420-437c-abe3-b568b5fced25"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.929236 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b32bd5-0420-437c-abe3-b568b5fced25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f0b32bd5-0420-437c-abe3-b568b5fced25" (UID: "f0b32bd5-0420-437c-abe3-b568b5fced25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:13:16 crc kubenswrapper[4870]: I0130 08:13:16.929446 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0b32bd5-0420-437c-abe3-b568b5fced25-kube-api-access-n2llk" (OuterVolumeSpecName: "kube-api-access-n2llk") pod "f0b32bd5-0420-437c-abe3-b568b5fced25" (UID: "f0b32bd5-0420-437c-abe3-b568b5fced25"). InnerVolumeSpecName "kube-api-access-n2llk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.027299 4870 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0b32bd5-0420-437c-abe3-b568b5fced25-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.027361 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b32bd5-0420-437c-abe3-b568b5fced25-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.027380 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2llk\" (UniqueName: \"kubernetes.io/projected/f0b32bd5-0420-437c-abe3-b568b5fced25-kube-api-access-n2llk\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.027399 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0b32bd5-0420-437c-abe3-b568b5fced25-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.027414 4870 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0b32bd5-0420-437c-abe3-b568b5fced25-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.206919 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5"] Jan 30 08:13:17 crc kubenswrapper[4870]: E0130 08:13:17.207176 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0b32bd5-0420-437c-abe3-b568b5fced25" containerName="controller-manager" Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.207191 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0b32bd5-0420-437c-abe3-b568b5fced25" containerName="controller-manager" Jan 30 08:13:17 crc kubenswrapper[4870]: E0130 08:13:17.207211 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2401867d-7869-4633-aeeb-bfb3653c2786" containerName="route-controller-manager" Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 
08:13:17.207220 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2401867d-7869-4633-aeeb-bfb3653c2786" containerName="route-controller-manager" Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.207323 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0b32bd5-0420-437c-abe3-b568b5fced25" containerName="controller-manager" Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.207337 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="2401867d-7869-4633-aeeb-bfb3653c2786" containerName="route-controller-manager" Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.207785 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5" Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.222739 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5"] Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.331453 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c410aef-bbc0-4b86-9693-8fea3d6a2b52-proxy-ca-bundles\") pod \"controller-manager-d47b9bcf6-rxwq5\" (UID: \"6c410aef-bbc0-4b86-9693-8fea3d6a2b52\") " pod="openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5" Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.331510 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c410aef-bbc0-4b86-9693-8fea3d6a2b52-serving-cert\") pod \"controller-manager-d47b9bcf6-rxwq5\" (UID: \"6c410aef-bbc0-4b86-9693-8fea3d6a2b52\") " pod="openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5" Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.331557 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98cnb\" (UniqueName: \"kubernetes.io/projected/6c410aef-bbc0-4b86-9693-8fea3d6a2b52-kube-api-access-98cnb\") pod \"controller-manager-d47b9bcf6-rxwq5\" (UID: \"6c410aef-bbc0-4b86-9693-8fea3d6a2b52\") " pod="openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5" Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.331739 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c410aef-bbc0-4b86-9693-8fea3d6a2b52-config\") pod \"controller-manager-d47b9bcf6-rxwq5\" (UID: \"6c410aef-bbc0-4b86-9693-8fea3d6a2b52\") " pod="openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5" Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.332016 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c410aef-bbc0-4b86-9693-8fea3d6a2b52-client-ca\") pod \"controller-manager-d47b9bcf6-rxwq5\" (UID: \"6c410aef-bbc0-4b86-9693-8fea3d6a2b52\") " pod="openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5" Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.342669 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj" event={"ID":"f0b32bd5-0420-437c-abe3-b568b5fced25","Type":"ContainerDied","Data":"9ddf8fa07886379822b5d7e837b006d9d1c35dacedfc83c71d1a7c67e61b89cf"} Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.342762 
4870 scope.go:117] "RemoveContainer" containerID="e0c909e10ad146c5f25ddc4c306ee746d3ae6daa59a6ed6f36b3bed06976cc14"
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.342690 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c6cdccc5f-szscj"
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.346535 4870 generic.go:334] "Generic (PLEG): container finished" podID="2401867d-7869-4633-aeeb-bfb3653c2786" containerID="761594cbe8a68e508978ea78d2572aa1dd60ea6b5eadf680df6e2838d7862ec9" exitCode=0
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.346612 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" event={"ID":"2401867d-7869-4633-aeeb-bfb3653c2786","Type":"ContainerDied","Data":"761594cbe8a68e508978ea78d2572aa1dd60ea6b5eadf680df6e2838d7862ec9"}
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.346710 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt" event={"ID":"2401867d-7869-4633-aeeb-bfb3653c2786","Type":"ContainerDied","Data":"a4cce0bf268510416f33b5ed9e80efbf966fec2a36ba47966d7bf118bdd855e6"}
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.346648 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt"
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.362273 4870 scope.go:117] "RemoveContainer" containerID="761594cbe8a68e508978ea78d2572aa1dd60ea6b5eadf680df6e2838d7862ec9"
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.381530 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c6cdccc5f-szscj"]
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.386714 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5c6cdccc5f-szscj"]
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.390665 4870 scope.go:117] "RemoveContainer" containerID="761594cbe8a68e508978ea78d2572aa1dd60ea6b5eadf680df6e2838d7862ec9"
Jan 30 08:13:17 crc kubenswrapper[4870]: E0130 08:13:17.395240 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"761594cbe8a68e508978ea78d2572aa1dd60ea6b5eadf680df6e2838d7862ec9\": container with ID starting with 761594cbe8a68e508978ea78d2572aa1dd60ea6b5eadf680df6e2838d7862ec9 not found: ID does not exist" containerID="761594cbe8a68e508978ea78d2572aa1dd60ea6b5eadf680df6e2838d7862ec9"
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.395349 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"761594cbe8a68e508978ea78d2572aa1dd60ea6b5eadf680df6e2838d7862ec9"} err="failed to get container status \"761594cbe8a68e508978ea78d2572aa1dd60ea6b5eadf680df6e2838d7862ec9\": rpc error: code = NotFound desc = could not find container \"761594cbe8a68e508978ea78d2572aa1dd60ea6b5eadf680df6e2838d7862ec9\": container with ID starting with 761594cbe8a68e508978ea78d2572aa1dd60ea6b5eadf680df6e2838d7862ec9 not found: ID does not exist"
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.398607 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt"]
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.403325 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658c669f4d-v69zt"]
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.433043 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c410aef-bbc0-4b86-9693-8fea3d6a2b52-proxy-ca-bundles\") pod \"controller-manager-d47b9bcf6-rxwq5\" (UID: \"6c410aef-bbc0-4b86-9693-8fea3d6a2b52\") " pod="openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5"
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.433104 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c410aef-bbc0-4b86-9693-8fea3d6a2b52-serving-cert\") pod \"controller-manager-d47b9bcf6-rxwq5\" (UID: \"6c410aef-bbc0-4b86-9693-8fea3d6a2b52\") " pod="openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5"
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.433150 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98cnb\" (UniqueName: \"kubernetes.io/projected/6c410aef-bbc0-4b86-9693-8fea3d6a2b52-kube-api-access-98cnb\") pod \"controller-manager-d47b9bcf6-rxwq5\" (UID: \"6c410aef-bbc0-4b86-9693-8fea3d6a2b52\") " pod="openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5"
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.433188 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c410aef-bbc0-4b86-9693-8fea3d6a2b52-config\") pod \"controller-manager-d47b9bcf6-rxwq5\" (UID: \"6c410aef-bbc0-4b86-9693-8fea3d6a2b52\") " pod="openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5"
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.433237 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c410aef-bbc0-4b86-9693-8fea3d6a2b52-client-ca\") pod \"controller-manager-d47b9bcf6-rxwq5\" (UID: \"6c410aef-bbc0-4b86-9693-8fea3d6a2b52\") " pod="openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5"
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.434508 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c410aef-bbc0-4b86-9693-8fea3d6a2b52-proxy-ca-bundles\") pod \"controller-manager-d47b9bcf6-rxwq5\" (UID: \"6c410aef-bbc0-4b86-9693-8fea3d6a2b52\") " pod="openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5"
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.434516 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c410aef-bbc0-4b86-9693-8fea3d6a2b52-client-ca\") pod \"controller-manager-d47b9bcf6-rxwq5\" (UID: \"6c410aef-bbc0-4b86-9693-8fea3d6a2b52\") " pod="openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5"
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.436584 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c410aef-bbc0-4b86-9693-8fea3d6a2b52-config\") pod \"controller-manager-d47b9bcf6-rxwq5\" (UID: \"6c410aef-bbc0-4b86-9693-8fea3d6a2b52\") " pod="openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5"
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.443351 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c410aef-bbc0-4b86-9693-8fea3d6a2b52-serving-cert\") pod \"controller-manager-d47b9bcf6-rxwq5\" (UID: \"6c410aef-bbc0-4b86-9693-8fea3d6a2b52\") " pod="openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5"
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.451434 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98cnb\" (UniqueName: \"kubernetes.io/projected/6c410aef-bbc0-4b86-9693-8fea3d6a2b52-kube-api-access-98cnb\") pod \"controller-manager-d47b9bcf6-rxwq5\" (UID: \"6c410aef-bbc0-4b86-9693-8fea3d6a2b52\") " pod="openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5"
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.522384 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5"
Jan 30 08:13:17 crc kubenswrapper[4870]: I0130 08:13:17.727936 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5"]
Jan 30 08:13:17 crc kubenswrapper[4870]: W0130 08:13:17.736479 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c410aef_bbc0_4b86_9693_8fea3d6a2b52.slice/crio-f597b452bb789ad048934aa20d33873fc1ab56b135ee49a8010312e54a2b7a86 WatchSource:0}: Error finding container f597b452bb789ad048934aa20d33873fc1ab56b135ee49a8010312e54a2b7a86: Status 404 returned error can't find the container with id f597b452bb789ad048934aa20d33873fc1ab56b135ee49a8010312e54a2b7a86
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.088180 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2401867d-7869-4633-aeeb-bfb3653c2786" path="/var/lib/kubelet/pods/2401867d-7869-4633-aeeb-bfb3653c2786/volumes"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.089544 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0b32bd5-0420-437c-abe3-b568b5fced25" path="/var/lib/kubelet/pods/f0b32bd5-0420-437c-abe3-b568b5fced25/volumes"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.206331 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-995758d5-t7n56"]
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.207243 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-995758d5-t7n56"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.209751 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.210224 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.210585 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.211077 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.212905 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.213633 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.235949 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-995758d5-t7n56"]
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.348295 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccfb579a-39e5-4f92-bb80-ac591fe08c9d-config\") pod \"route-controller-manager-995758d5-t7n56\" (UID: \"ccfb579a-39e5-4f92-bb80-ac591fe08c9d\") " pod="openshift-route-controller-manager/route-controller-manager-995758d5-t7n56"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.348385 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ccfb579a-39e5-4f92-bb80-ac591fe08c9d-client-ca\") pod \"route-controller-manager-995758d5-t7n56\" (UID: \"ccfb579a-39e5-4f92-bb80-ac591fe08c9d\") " pod="openshift-route-controller-manager/route-controller-manager-995758d5-t7n56"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.348463 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccfb579a-39e5-4f92-bb80-ac591fe08c9d-serving-cert\") pod \"route-controller-manager-995758d5-t7n56\" (UID: \"ccfb579a-39e5-4f92-bb80-ac591fe08c9d\") " pod="openshift-route-controller-manager/route-controller-manager-995758d5-t7n56"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.348494 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d26w\" (UniqueName: \"kubernetes.io/projected/ccfb579a-39e5-4f92-bb80-ac591fe08c9d-kube-api-access-2d26w\") pod \"route-controller-manager-995758d5-t7n56\" (UID: \"ccfb579a-39e5-4f92-bb80-ac591fe08c9d\") " pod="openshift-route-controller-manager/route-controller-manager-995758d5-t7n56"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.354308 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5" event={"ID":"6c410aef-bbc0-4b86-9693-8fea3d6a2b52","Type":"ContainerStarted","Data":"976ddcd8c849be741323e926680bf57ea7aee95de99ac166b0471fdaa18680e5"}
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.354568 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.354668 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5" event={"ID":"6c410aef-bbc0-4b86-9693-8fea3d6a2b52","Type":"ContainerStarted","Data":"f597b452bb789ad048934aa20d33873fc1ab56b135ee49a8010312e54a2b7a86"}
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.360341 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.371684 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d47b9bcf6-rxwq5" podStartSLOduration=2.37166018 podStartE2EDuration="2.37166018s" podCreationTimestamp="2026-01-30 08:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:13:18.371461544 +0000 UTC m=+237.067008653" watchObservedRunningTime="2026-01-30 08:13:18.37166018 +0000 UTC m=+237.067207299"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.450386 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccfb579a-39e5-4f92-bb80-ac591fe08c9d-serving-cert\") pod \"route-controller-manager-995758d5-t7n56\" (UID: \"ccfb579a-39e5-4f92-bb80-ac591fe08c9d\") " pod="openshift-route-controller-manager/route-controller-manager-995758d5-t7n56"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.450476 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d26w\" (UniqueName: \"kubernetes.io/projected/ccfb579a-39e5-4f92-bb80-ac591fe08c9d-kube-api-access-2d26w\") pod \"route-controller-manager-995758d5-t7n56\" (UID: \"ccfb579a-39e5-4f92-bb80-ac591fe08c9d\") " pod="openshift-route-controller-manager/route-controller-manager-995758d5-t7n56"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.450533 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccfb579a-39e5-4f92-bb80-ac591fe08c9d-config\") pod \"route-controller-manager-995758d5-t7n56\" (UID: \"ccfb579a-39e5-4f92-bb80-ac591fe08c9d\") " pod="openshift-route-controller-manager/route-controller-manager-995758d5-t7n56"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.450585 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ccfb579a-39e5-4f92-bb80-ac591fe08c9d-client-ca\") pod \"route-controller-manager-995758d5-t7n56\" (UID: \"ccfb579a-39e5-4f92-bb80-ac591fe08c9d\") " pod="openshift-route-controller-manager/route-controller-manager-995758d5-t7n56"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.451754 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ccfb579a-39e5-4f92-bb80-ac591fe08c9d-client-ca\") pod \"route-controller-manager-995758d5-t7n56\" (UID: \"ccfb579a-39e5-4f92-bb80-ac591fe08c9d\") " pod="openshift-route-controller-manager/route-controller-manager-995758d5-t7n56"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.451969 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccfb579a-39e5-4f92-bb80-ac591fe08c9d-config\") pod \"route-controller-manager-995758d5-t7n56\" (UID: \"ccfb579a-39e5-4f92-bb80-ac591fe08c9d\") " pod="openshift-route-controller-manager/route-controller-manager-995758d5-t7n56"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.459774 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccfb579a-39e5-4f92-bb80-ac591fe08c9d-serving-cert\") pod \"route-controller-manager-995758d5-t7n56\" (UID: \"ccfb579a-39e5-4f92-bb80-ac591fe08c9d\") " pod="openshift-route-controller-manager/route-controller-manager-995758d5-t7n56"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.468149 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d26w\" (UniqueName: \"kubernetes.io/projected/ccfb579a-39e5-4f92-bb80-ac591fe08c9d-kube-api-access-2d26w\") pod \"route-controller-manager-995758d5-t7n56\" (UID: \"ccfb579a-39e5-4f92-bb80-ac591fe08c9d\") " pod="openshift-route-controller-manager/route-controller-manager-995758d5-t7n56"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.523392 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-995758d5-t7n56"
Jan 30 08:13:18 crc kubenswrapper[4870]: I0130 08:13:18.966440 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-995758d5-t7n56"]
Jan 30 08:13:18 crc kubenswrapper[4870]: W0130 08:13:18.979432 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccfb579a_39e5_4f92_bb80_ac591fe08c9d.slice/crio-0e40cbbbe778ba97f80414f9b2ea05752afbc5dbc3782b4562ed7d64c2c7ac1c WatchSource:0}: Error finding container 0e40cbbbe778ba97f80414f9b2ea05752afbc5dbc3782b4562ed7d64c2c7ac1c: Status 404 returned error can't find the container with id 0e40cbbbe778ba97f80414f9b2ea05752afbc5dbc3782b4562ed7d64c2c7ac1c
Jan 30 08:13:19 crc kubenswrapper[4870]: I0130 08:13:19.367469 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-995758d5-t7n56" event={"ID":"ccfb579a-39e5-4f92-bb80-ac591fe08c9d","Type":"ContainerStarted","Data":"09b638e3f1ceda7ea050b187880eb53408840e59511bb58cb7bdfb7a4aeced91"}
Jan 30 08:13:19 crc kubenswrapper[4870]: I0130 08:13:19.368236 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-995758d5-t7n56" event={"ID":"ccfb579a-39e5-4f92-bb80-ac591fe08c9d","Type":"ContainerStarted","Data":"0e40cbbbe778ba97f80414f9b2ea05752afbc5dbc3782b4562ed7d64c2c7ac1c"}
Jan 30 08:13:20 crc kubenswrapper[4870]: I0130 08:13:20.373647 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-995758d5-t7n56"
Jan 30 08:13:20 crc kubenswrapper[4870]: I0130 08:13:20.378650 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-995758d5-t7n56"
Jan 30 08:13:20 crc kubenswrapper[4870]: I0130 08:13:20.404247 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-995758d5-t7n56" podStartSLOduration=4.404225728 podStartE2EDuration="4.404225728s" podCreationTimestamp="2026-01-30 08:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:13:19.403023426 +0000 UTC m=+238.098570575" watchObservedRunningTime="2026-01-30 08:13:20.404225728 +0000 UTC m=+239.099772837"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.421018 4870 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.422326 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230" gracePeriod=15
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.422507 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88" gracePeriod=15
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.422597 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f" gracePeriod=15
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.422492 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c" gracePeriod=15
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.422693 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3" gracePeriod=15
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.423675 4870 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 08:13:28 crc kubenswrapper[4870]: E0130 08:13:28.424019 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.424044 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 30 08:13:28 crc kubenswrapper[4870]: E0130 08:13:28.424059 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.424065 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 30 08:13:28 crc kubenswrapper[4870]: E0130 08:13:28.424074 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.424080 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 30 08:13:28 crc kubenswrapper[4870]: E0130 08:13:28.424095 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.424100 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 30 08:13:28 crc kubenswrapper[4870]: E0130 08:13:28.424114 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.424120 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 30 08:13:28 crc kubenswrapper[4870]: E0130 08:13:28.424126 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.424132 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 30 08:13:28 crc kubenswrapper[4870]: E0130 08:13:28.424138 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.424144 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.424260 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.424269 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.424284 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.424293 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.424301 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.424635 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.425996 4870 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.426534 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.431515 4870 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13"
Jan 30 08:13:28 crc kubenswrapper[4870]: E0130 08:13:28.499714 4870 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.227:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.516262 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.516358 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.516396 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.516449 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.516474 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.516495 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.516523 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.516577 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.618267 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.618345 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.618376 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.618413 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.618431 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.618448 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.618474 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.618515 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.618603 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.618650 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.618673 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.618696 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.618715 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.618736 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.618928 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.618977 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: I0130 08:13:28.800959 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:13:28 crc kubenswrapper[4870]: E0130 08:13:28.827093 4870 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.227:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f74143b77aeea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 08:13:28.825761514 +0000 UTC m=+247.521308623,LastTimestamp:2026-01-30 08:13:28.825761514 +0000 UTC m=+247.521308623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 08:13:29 crc kubenswrapper[4870]: I0130 08:13:29.433953 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c0c72089a10d44666fb67e168c5b95e04bd873513b7b27b2f406fcf4e49ed8d1"}
Jan 30 08:13:29 crc kubenswrapper[4870]: I0130 08:13:29.434389 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2ffbf6ecdfdc578ae4eb5e6146f7bb20cf934023f01259cb24c87c1a90430b78"}
Jan 30 08:13:29 crc kubenswrapper[4870]: E0130 08:13:29.435717 4870 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.227:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:13:29 crc kubenswrapper[4870]: I0130 08:13:29.436892 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 30 08:13:29 crc kubenswrapper[4870]: I0130 08:13:29.438580 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 30 08:13:29 crc kubenswrapper[4870]: I0130 08:13:29.439339 4870 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c" exitCode=0
Jan 30 08:13:29 crc kubenswrapper[4870]: I0130 08:13:29.439380 4870 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88" exitCode=0
Jan 30 08:13:29 crc kubenswrapper[4870]: I0130 08:13:29.439391 4870 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3" exitCode=0
Jan 30 08:13:29 crc kubenswrapper[4870]: I0130 08:13:29.439400 4870 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f" exitCode=2
Jan 30 08:13:29 crc kubenswrapper[4870]: I0130 08:13:29.439436 4870 scope.go:117] "RemoveContainer" containerID="6fcd8a03faee764828ab3350a415aa8068f0c3d2dbb53c85fca720b11996a596"
Jan 30 08:13:29 crc kubenswrapper[4870]: I0130 08:13:29.442162 4870 generic.go:334] "Generic (PLEG): container finished" podID="08cc9cc4-dd06-46a2-94c4-0b1977cd1adc" containerID="870ae255fc8aa69089480c5b4f44f2d48029e57db6c300a41e2ada010df31423" exitCode=0
Jan 30 08:13:29 crc kubenswrapper[4870]: I0130 08:13:29.442213 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"08cc9cc4-dd06-46a2-94c4-0b1977cd1adc","Type":"ContainerDied","Data":"870ae255fc8aa69089480c5b4f44f2d48029e57db6c300a41e2ada010df31423"}
Jan 30 08:13:29 crc kubenswrapper[4870]: I0130 08:13:29.443104 4870 status_manager.go:851] "Failed to get status for pod" podUID="08cc9cc4-dd06-46a2-94c4-0b1977cd1adc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.227:6443: connect: connection refused"
Jan 30 08:13:30 crc kubenswrapper[4870]: I0130 08:13:30.452297 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 30 08:13:30 crc kubenswrapper[4870]: I0130 08:13:30.863429 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 30 08:13:30 crc kubenswrapper[4870]: I0130 08:13:30.864944 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 08:13:30 crc kubenswrapper[4870]: I0130 08:13:30.866104 4870 status_manager.go:851] "Failed to get status for pod" podUID="08cc9cc4-dd06-46a2-94c4-0b1977cd1adc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.227:6443: connect: connection refused"
Jan 30 08:13:30 crc kubenswrapper[4870]: I0130 08:13:30.866326 4870 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.227:6443: connect: connection refused"
Jan 30 08:13:30 crc kubenswrapper[4870]: I0130 08:13:30.963893 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 30 08:13:30 crc kubenswrapper[4870]: I0130 08:13:30.964082 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 30 08:13:30 crc kubenswrapper[4870]: I0130 08:13:30.964119 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 30 08:13:30 crc kubenswrapper[4870]: I0130 08:13:30.964536 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 08:13:30 crc kubenswrapper[4870]: I0130 08:13:30.964583 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 08:13:30 crc kubenswrapper[4870]: I0130 08:13:30.964602 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.010215 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.011083 4870 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.227:6443: connect: connection refused"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.011863 4870 status_manager.go:851] "Failed to get status for pod" podUID="08cc9cc4-dd06-46a2-94c4-0b1977cd1adc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.227:6443: connect: connection refused"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.065920 4870 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.065969 4870 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.065980 4870 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.167008 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08cc9cc4-dd06-46a2-94c4-0b1977cd1adc-kube-api-access\") pod \"08cc9cc4-dd06-46a2-94c4-0b1977cd1adc\" (UID: \"08cc9cc4-dd06-46a2-94c4-0b1977cd1adc\") "
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.167107 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/08cc9cc4-dd06-46a2-94c4-0b1977cd1adc-var-lock\") pod \"08cc9cc4-dd06-46a2-94c4-0b1977cd1adc\" (UID: \"08cc9cc4-dd06-46a2-94c4-0b1977cd1adc\") "
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.167127 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08cc9cc4-dd06-46a2-94c4-0b1977cd1adc-kubelet-dir\") pod \"08cc9cc4-dd06-46a2-94c4-0b1977cd1adc\" (UID: \"08cc9cc4-dd06-46a2-94c4-0b1977cd1adc\") "
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.167278 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08cc9cc4-dd06-46a2-94c4-0b1977cd1adc-var-lock" (OuterVolumeSpecName: "var-lock") pod "08cc9cc4-dd06-46a2-94c4-0b1977cd1adc" (UID: "08cc9cc4-dd06-46a2-94c4-0b1977cd1adc"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.167334 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08cc9cc4-dd06-46a2-94c4-0b1977cd1adc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "08cc9cc4-dd06-46a2-94c4-0b1977cd1adc" (UID: "08cc9cc4-dd06-46a2-94c4-0b1977cd1adc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.167858 4870 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/08cc9cc4-dd06-46a2-94c4-0b1977cd1adc-var-lock\") on node \"crc\" DevicePath \"\""
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.167897 4870 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08cc9cc4-dd06-46a2-94c4-0b1977cd1adc-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.173775 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08cc9cc4-dd06-46a2-94c4-0b1977cd1adc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "08cc9cc4-dd06-46a2-94c4-0b1977cd1adc" (UID: "08cc9cc4-dd06-46a2-94c4-0b1977cd1adc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.269110 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08cc9cc4-dd06-46a2-94c4-0b1977cd1adc-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.462828 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"08cc9cc4-dd06-46a2-94c4-0b1977cd1adc","Type":"ContainerDied","Data":"7bf7aacf1a3cb5782a5f9385d5b6312bd1fa309375e7e58df111c48bf3bdf731"}
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.462867 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.462917 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bf7aacf1a3cb5782a5f9385d5b6312bd1fa309375e7e58df111c48bf3bdf731"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.467030 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.469723 4870 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230" exitCode=0
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.469795 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.469808 4870 scope.go:117] "RemoveContainer" containerID="ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.480121 4870 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.227:6443: connect: connection refused"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.480945 4870 status_manager.go:851] "Failed to get status for pod" podUID="08cc9cc4-dd06-46a2-94c4-0b1977cd1adc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.227:6443: connect: connection refused"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.492647 4870 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.227:6443: connect: connection refused"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.493004 4870 status_manager.go:851] "Failed to get status for pod" podUID="08cc9cc4-dd06-46a2-94c4-0b1977cd1adc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.227:6443: connect: connection refused"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.502224 4870 scope.go:117] "RemoveContainer" containerID="d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.520976 4870 scope.go:117] "RemoveContainer" containerID="8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.540747 4870 scope.go:117] "RemoveContainer" containerID="8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.570545 4870 scope.go:117] "RemoveContainer" containerID="217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230"
Jan 30 08:13:31 crc kubenswrapper[4870]: E0130 08:13:31.580010 4870 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.227:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f74143b77aeea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 08:13:28.825761514 +0000 UTC m=+247.521308623,LastTimestamp:2026-01-30 08:13:28.825761514 +0000 UTC m=+247.521308623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.591615 4870 scope.go:117] "RemoveContainer" containerID="f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.621066 4870 scope.go:117] "RemoveContainer" containerID="ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c"
Jan 30 08:13:31 crc kubenswrapper[4870]: E0130 08:13:31.621689 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\": container with ID starting with ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c not found: ID does not exist" containerID="ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.621743 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c"} err="failed to get container status \"ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\": rpc error: code = NotFound desc = could not find container \"ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c\": container with ID starting with ccc2d63f97db5321f9c72585cd6e562a0ca4f14963330cacfb3782e7d2dec35c not found: ID does not exist"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.621776 4870 scope.go:117] "RemoveContainer" containerID="d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88"
Jan 30 08:13:31 crc kubenswrapper[4870]: E0130 08:13:31.622138 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\": container with ID starting with d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88 not found: ID does not exist" containerID="d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.622176 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88"} err="failed to get container status \"d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\": rpc error: code = NotFound desc = could not find container \"d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88\": container with ID starting with d4f5020dd829f7eab4865918ab90a3e17125e207ad5a16712499dd3131536a88 not found: ID does not exist"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.622205 4870 scope.go:117] "RemoveContainer" containerID="8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3"
Jan 30 08:13:31 crc kubenswrapper[4870]: E0130 08:13:31.624570 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\": container with ID starting with 8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3 not found: ID does not exist" containerID="8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.624668 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3"} err="failed to get container status \"8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\": rpc error: code = NotFound desc = could not find container \"8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3\": container with ID starting with 8da738a7146edaf4ed68342ddd159850cc5a05d6ed32a6d33f88489b476a1dc3 not found: ID does not exist"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.624758 4870 scope.go:117] "RemoveContainer" containerID="8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f"
Jan 30 08:13:31 crc kubenswrapper[4870]: E0130 08:13:31.625122 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\": container with ID starting with 8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f not found: ID does not exist" containerID="8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.625154 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f"} err="failed to get container status \"8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\": rpc error: code = NotFound desc = could not find container \"8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f\": container with ID starting with 8b62f7006d760ccb828d749ade16e3e954ad73d4443910e839dc0da99fc7da7f not found: ID does not exist"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.625171 4870 scope.go:117] "RemoveContainer" containerID="217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230"
Jan 30 08:13:31 crc kubenswrapper[4870]: E0130 08:13:31.625435 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\": container with ID starting with 217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230 not found: ID does not exist" containerID="217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.625470 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230"} err="failed to get container status \"217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\": rpc error: code = NotFound desc = could not find container \"217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230\": container with ID starting with 217fcd0b4245574b2e5a6ca34fd1e2834501700b4a204fde692811209c58d230 not found: ID does not exist"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.625490 4870 scope.go:117] "RemoveContainer" containerID="f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e"
Jan 30 08:13:31 crc kubenswrapper[4870]: E0130 08:13:31.625746 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\": container with ID starting with f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e not found: ID does not exist" containerID="f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e"
Jan 30 08:13:31 crc kubenswrapper[4870]: I0130 08:13:31.625772 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e"} err="failed to get container status \"f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\": rpc error: code = NotFound desc = could not find container \"f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e\": container with ID starting with f53bf7dfca462f12650874f2cf41b88c4f423acdda333808c8375b1150d1b25e not found: ID does not exist"
Jan 30 08:13:32 crc kubenswrapper[4870]: I0130 08:13:32.076917 4870 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.227:6443: connect: connection refused"
Jan 30 08:13:32 crc kubenswrapper[4870]: I0130 08:13:32.077202 4870 status_manager.go:851] "Failed to get status for pod" podUID="08cc9cc4-dd06-46a2-94c4-0b1977cd1adc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.227:6443: connect: connection refused"
Jan 30 08:13:32 crc kubenswrapper[4870]: I0130 08:13:32.093032 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 30 08:13:33 crc kubenswrapper[4870]: E0130 08:13:33.054927 4870 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.227:6443: connect: connection refused"
Jan 30 08:13:33 crc kubenswrapper[4870]: E0130 08:13:33.055494 4870 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.227:6443: connect: connection refused"
Jan 30 08:13:33 crc kubenswrapper[4870]: E0130 08:13:33.056056 4870 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.227:6443: connect: connection refused"
Jan 30 08:13:33 crc kubenswrapper[4870]: E0130 08:13:33.056505 4870 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.227:6443: connect: connection refused"
Jan 30 08:13:33 crc kubenswrapper[4870]: E0130 08:13:33.056985 4870 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.227:6443: connect: connection refused"
Jan 30 08:13:33 crc kubenswrapper[4870]: I0130 08:13:33.057039 4870 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 30 08:13:33 crc kubenswrapper[4870]: E0130 08:13:33.057510 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.227:6443: connect: connection refused" interval="200ms"
Jan 30 08:13:33 crc kubenswrapper[4870]: E0130 08:13:33.258480 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.227:6443: connect: connection refused" interval="400ms"
Jan 30 08:13:33 crc kubenswrapper[4870]: E0130 08:13:33.660078 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.227:6443: connect: connection refused" interval="800ms"
Jan 30 08:13:34 crc kubenswrapper[4870]: E0130 08:13:34.462571 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.227:6443: connect: connection refused" interval="1.6s"
Jan 30 08:13:36 crc kubenswrapper[4870]: E0130 08:13:36.064826 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.227:6443: connect: connection refused" interval="3.2s"
Jan 30 08:13:39 crc kubenswrapper[4870]: E0130 08:13:39.266806 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.227:6443: connect: connection refused" interval="6.4s"
Jan 30 08:13:41 crc kubenswrapper[4870]: I0130 08:13:41.064812 4870 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 30 08:13:41 crc kubenswrapper[4870]: I0130 08:13:41.065477 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 30 08:13:41 crc kubenswrapper[4870]: I0130 08:13:41.554577 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 30 08:13:41 crc kubenswrapper[4870]: I0130 08:13:41.554655 4870 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee" exitCode=1
Jan 30 08:13:41 crc kubenswrapper[4870]: I0130 08:13:41.554703 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee"}
Jan 30 08:13:41 crc kubenswrapper[4870]: I0130 08:13:41.555412 4870 scope.go:117] "RemoveContainer"
containerID="39954930519fa4f0bd5c32bbcc70efd3789d0f7c88db456f84cb5aac9914e7ee" Jan 30 08:13:41 crc kubenswrapper[4870]: I0130 08:13:41.556120 4870 status_manager.go:851] "Failed to get status for pod" podUID="08cc9cc4-dd06-46a2-94c4-0b1977cd1adc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.227:6443: connect: connection refused" Jan 30 08:13:41 crc kubenswrapper[4870]: I0130 08:13:41.556729 4870 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.227:6443: connect: connection refused" Jan 30 08:13:41 crc kubenswrapper[4870]: E0130 08:13:41.581434 4870 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.227:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f74143b77aeea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 08:13:28.825761514 +0000 UTC m=+247.521308623,LastTimestamp:2026-01-30 08:13:28.825761514 +0000 UTC m=+247.521308623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 08:13:42 crc kubenswrapper[4870]: I0130 08:13:42.074764 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:13:42 crc kubenswrapper[4870]: I0130 08:13:42.081354 4870 status_manager.go:851] "Failed to get status for pod" podUID="08cc9cc4-dd06-46a2-94c4-0b1977cd1adc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.227:6443: connect: connection refused" Jan 30 08:13:42 crc kubenswrapper[4870]: I0130 08:13:42.082341 4870 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.227:6443: connect: connection refused" Jan 30 08:13:42 crc kubenswrapper[4870]: I0130 08:13:42.083414 4870 status_manager.go:851] "Failed to get status for pod" podUID="08cc9cc4-dd06-46a2-94c4-0b1977cd1adc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.227:6443: connect: connection refused" Jan 30 08:13:42 crc kubenswrapper[4870]: I0130 08:13:42.084124 4870 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.227:6443: connect: connection refused" Jan 30 08:13:42 crc kubenswrapper[4870]: I0130 08:13:42.109956 4870 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff" Jan 30 08:13:42 crc kubenswrapper[4870]: I0130 08:13:42.110019 4870 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff" Jan 30 08:13:42 crc kubenswrapper[4870]: E0130 08:13:42.110920 4870 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.227:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:13:42 crc kubenswrapper[4870]: I0130 08:13:42.111842 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:13:42 crc kubenswrapper[4870]: W0130 08:13:42.136555 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-5d6ee682a7137fd8fd43af6de87510fbef44ab62bc710438839abc6b85902c66 WatchSource:0}: Error finding container 5d6ee682a7137fd8fd43af6de87510fbef44ab62bc710438839abc6b85902c66: Status 404 returned error can't find the container with id 5d6ee682a7137fd8fd43af6de87510fbef44ab62bc710438839abc6b85902c66 Jan 30 08:13:42 crc kubenswrapper[4870]: I0130 08:13:42.562273 4870 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="801bf827ba75a2bd4f1f66ac98628a28283cc9ae17abf74ddb2b42aa68294fc2" exitCode=0 Jan 30 08:13:42 crc kubenswrapper[4870]: I0130 08:13:42.562383 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"801bf827ba75a2bd4f1f66ac98628a28283cc9ae17abf74ddb2b42aa68294fc2"} Jan 30 08:13:42 crc kubenswrapper[4870]: I0130 08:13:42.562436 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5d6ee682a7137fd8fd43af6de87510fbef44ab62bc710438839abc6b85902c66"} Jan 30 08:13:42 crc kubenswrapper[4870]: I0130 08:13:42.562740 4870 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff" Jan 30 08:13:42 crc kubenswrapper[4870]: I0130 08:13:42.562760 4870 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff" Jan 30 08:13:42 crc kubenswrapper[4870]: I0130 08:13:42.563127 4870 status_manager.go:851] "Failed to get status for pod" podUID="08cc9cc4-dd06-46a2-94c4-0b1977cd1adc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.227:6443: connect: connection refused" Jan 30 08:13:42 crc kubenswrapper[4870]: E0130 08:13:42.563270 4870 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.227:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:13:42 crc kubenswrapper[4870]: I0130 08:13:42.563395 4870 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.227:6443: connect: connection refused" Jan 30 08:13:42 crc kubenswrapper[4870]: E0130 08:13:42.568321 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:13:42Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:13:42Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:13:42Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:13:42Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.227:6443: connect: connection refused" Jan 30 08:13:42 crc kubenswrapper[4870]: E0130 08:13:42.568615 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.227:6443: connect: connection refused" Jan 30 08:13:42 crc kubenswrapper[4870]: E0130 08:13:42.569025 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.227:6443: connect: connection refused" Jan 30 08:13:42 crc kubenswrapper[4870]: E0130 08:13:42.569240 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.227:6443: connect: connection refused" Jan 30 08:13:42 crc kubenswrapper[4870]: I0130 08:13:42.569383 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 08:13:42 crc kubenswrapper[4870]: I0130 08:13:42.569428 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3a4b05f9c56e0bcb68b90d3bc04c870a8ad34240b7b8e01fe0ea0c0ff8d96966"} Jan 30 08:13:42 crc kubenswrapper[4870]: E0130 08:13:42.569505 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.227:6443: connect: connection refused" Jan 30 08:13:42 crc kubenswrapper[4870]: E0130 08:13:42.569528 4870 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 08:13:42 crc kubenswrapper[4870]: I0130 08:13:42.570058 4870 status_manager.go:851] "Failed to get status for pod" podUID="08cc9cc4-dd06-46a2-94c4-0b1977cd1adc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.227:6443: connect: connection refused" Jan 30 08:13:42 crc kubenswrapper[4870]: I0130 08:13:42.570341 4870 status_manager.go:851] "Failed to get status for pod" 
podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.227:6443: connect: connection refused" Jan 30 08:13:43 crc kubenswrapper[4870]: I0130 08:13:43.590641 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"18505e124e70139bc29db2f9c0c908d32a57c36d399f2bf87b4d89e5eb54791d"} Jan 30 08:13:43 crc kubenswrapper[4870]: I0130 08:13:43.591146 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"84bfff090f314b9cf4785bf9ebd2f087ce25e90005c9bf86a319e189bfd50d2f"} Jan 30 08:13:43 crc kubenswrapper[4870]: I0130 08:13:43.591160 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2dc5dfd23b2c524d1eb3a37239b6b72ef241056263a87dbdbd456e4496a40e33"} Jan 30 08:13:43 crc kubenswrapper[4870]: I0130 08:13:43.591171 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"150630c5483ac78de27ff71355e3e33875c6d5b13a7128b1084bfd4e2280ded6"} Jan 30 08:13:44 crc kubenswrapper[4870]: I0130 08:13:44.601026 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"09329e389e6466ca3f26daf4fe54c77798e955e67860273c6afe3e689d2cded1"} Jan 30 08:13:44 crc kubenswrapper[4870]: I0130 08:13:44.601270 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:13:44 crc kubenswrapper[4870]: I0130 08:13:44.601494 4870 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff" Jan 30 08:13:44 crc kubenswrapper[4870]: I0130 08:13:44.601543 4870 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff" Jan 30 08:13:47 crc kubenswrapper[4870]: I0130 08:13:47.112845 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:13:47 crc kubenswrapper[4870]: I0130 08:13:47.113914 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:13:47 crc kubenswrapper[4870]: I0130 08:13:47.120962 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:13:48 crc kubenswrapper[4870]: I0130 08:13:48.037721 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:13:48 crc kubenswrapper[4870]: I0130 08:13:48.561467 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:13:48 crc kubenswrapper[4870]: I0130 08:13:48.561973 4870 patch_prober.go:28] interesting 
pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 30 08:13:48 crc kubenswrapper[4870]: I0130 08:13:48.562019 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 30 08:13:49 crc kubenswrapper[4870]: I0130 08:13:49.613679 4870 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:13:49 crc kubenswrapper[4870]: I0130 08:13:49.640534 4870 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff" Jan 30 08:13:49 crc kubenswrapper[4870]: I0130 08:13:49.640572 4870 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff" Jan 30 08:13:49 crc kubenswrapper[4870]: I0130 08:13:49.644573 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:13:50 crc kubenswrapper[4870]: I0130 08:13:50.647358 4870 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff" Jan 30 08:13:50 crc kubenswrapper[4870]: I0130 08:13:50.647737 4870 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff" Jan 30 08:13:52 crc kubenswrapper[4870]: I0130 08:13:52.107702 4870 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2893a51b-5127-4be5-aa18-e3db0e84dad1" Jan 30 08:13:58 crc kubenswrapper[4870]: I0130 08:13:58.559767 4870 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 30 08:13:58 crc kubenswrapper[4870]: I0130 08:13:58.560861 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 30 08:13:59 crc kubenswrapper[4870]: I0130 08:13:59.261736 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 08:13:59 crc kubenswrapper[4870]: I0130 08:13:59.499626 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 08:13:59 crc kubenswrapper[4870]: I0130 08:13:59.657937 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 08:13:59 crc kubenswrapper[4870]: I0130 08:13:59.903243 4870 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 08:14:00 crc kubenswrapper[4870]: I0130 08:14:00.390132 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 08:14:00 crc kubenswrapper[4870]: I0130 08:14:00.540224 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 08:14:00 crc kubenswrapper[4870]: I0130 08:14:00.643933 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 08:14:00 crc kubenswrapper[4870]: I0130 08:14:00.969636 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 08:14:00 crc kubenswrapper[4870]: I0130 08:14:00.997514 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 08:14:01 crc kubenswrapper[4870]: I0130 08:14:01.065330 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 08:14:01 crc kubenswrapper[4870]: I0130 08:14:01.156862 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 08:14:01 crc kubenswrapper[4870]: I0130 08:14:01.182400 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 08:14:01 crc kubenswrapper[4870]: I0130 08:14:01.626839 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 08:14:01 crc kubenswrapper[4870]: I0130 08:14:01.885112 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 08:14:01 crc kubenswrapper[4870]: I0130 08:14:01.960991 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 08:14:02 crc kubenswrapper[4870]: I0130 08:14:02.030139 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 08:14:02 crc kubenswrapper[4870]: I0130 08:14:02.038255 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 08:14:02 crc kubenswrapper[4870]: I0130 08:14:02.152666 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 08:14:02 crc kubenswrapper[4870]: I0130 08:14:02.248654 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 08:14:02 crc kubenswrapper[4870]: I0130 08:14:02.435935 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 08:14:02 crc kubenswrapper[4870]: I0130 08:14:02.462684 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 08:14:02 crc kubenswrapper[4870]: I0130 08:14:02.561188 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 08:14:02 crc kubenswrapper[4870]: I0130 08:14:02.562031 4870 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 08:14:02 crc kubenswrapper[4870]: I0130 08:14:02.665294 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 08:14:02 crc kubenswrapper[4870]: I0130 08:14:02.805155 4870 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 08:14:02 crc kubenswrapper[4870]: I0130 08:14:02.849114 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 08:14:03 crc kubenswrapper[4870]: I0130 08:14:03.039204 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 08:14:03 crc kubenswrapper[4870]: I0130 08:14:03.044040 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 08:14:03 crc kubenswrapper[4870]: I0130 08:14:03.178753 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 08:14:03 crc kubenswrapper[4870]: I0130 08:14:03.189189 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 08:14:03 crc kubenswrapper[4870]: I0130 08:14:03.229598 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 08:14:03 crc kubenswrapper[4870]: I0130 08:14:03.333834 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 08:14:03 crc kubenswrapper[4870]: I0130 08:14:03.394645 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 08:14:03 crc kubenswrapper[4870]: I0130 08:14:03.450208 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 08:14:03 crc kubenswrapper[4870]: I0130 08:14:03.501667 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 08:14:03 crc kubenswrapper[4870]: I0130 08:14:03.526834 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 08:14:03 crc kubenswrapper[4870]: I0130 08:14:03.582804 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 08:14:03 crc kubenswrapper[4870]: I0130 08:14:03.596748 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 08:14:03 crc kubenswrapper[4870]: I0130 08:14:03.703004 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 08:14:03 crc kubenswrapper[4870]: I0130 08:14:03.716128 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 08:14:03 crc kubenswrapper[4870]: I0130 08:14:03.763801 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 08:14:03 crc kubenswrapper[4870]: I0130 08:14:03.791164 4870 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 08:14:03 crc kubenswrapper[4870]: I0130 08:14:03.915916 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.028273 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.165534 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.212704 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.251188 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.306802 4870 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.314683 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.315025 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.315191 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jh9j6","openshift-marketplace/redhat-operators-85lwg","openshift-marketplace/redhat-marketplace-jqng8","openshift-marketplace/community-operators-cx2x5","openshift-marketplace/certified-operators-rk4lj"] Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.315603 4870 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff" Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.315648 4870 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="aad2d5d4-4cd8-4b7e-ba10-017838ecf3ff" Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.315624 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cx2x5" podUID="258d3e35-5580-4108-889c-9d5d2f80c810" containerName="registry-server" containerID="cri-o://5d7f1da1a59f0ee841deb45c6681be28192aa5d1b0765a1de1cc229f89986ccd" gracePeriod=30 Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.316063 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jqng8" podUID="56cb5ce8-da4f-4c24-9805-18a91b316bcd" containerName="registry-server" containerID="cri-o://5086a69b5f8df7175222c9a53597ceeaa092692fca7dd2ea0dc59c15c50cec17" gracePeriod=30 Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.316109 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" podUID="8ede517d-773d-4f0b-8c0a-42ae13359f95" containerName="marketplace-operator" containerID="cri-o://97e57ef1c7ce66b357f1ed6e1f8847cdeaccd06e556466aa82594a1548b78355" gracePeriod=30 Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.316204 4870 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-85lwg" podUID="1d50529a-bc06-49a9-a5bf-64e91e8734c2" containerName="registry-server" containerID="cri-o://bbc787722aa3ad5d86b9358f4c93fb7295e3213baf6f9aa990b2876df2f315f2" gracePeriod=30 Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.316396 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rk4lj" podUID="ba2950a4-e1b9-45a9-9980-1b4169e0fb16" containerName="registry-server" containerID="cri-o://9c925d71b4dfdc55925a74993dfa3447a8c069656a12a2daf0cbfcade11ab1ed" gracePeriod=30 Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.336084 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.353063 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.353016566 podStartE2EDuration="15.353016566s" podCreationTimestamp="2026-01-30 08:13:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:14:04.350082414 +0000 UTC m=+283.045629613" watchObservedRunningTime="2026-01-30 08:14:04.353016566 +0000 UTC m=+283.048563675" Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.360079 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.550710 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.571455 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.637906 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.733289 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.743951 4870 generic.go:334] "Generic (PLEG): container finished" podID="56cb5ce8-da4f-4c24-9805-18a91b316bcd" containerID="5086a69b5f8df7175222c9a53597ceeaa092692fca7dd2ea0dc59c15c50cec17" exitCode=0 Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.744033 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqng8" event={"ID":"56cb5ce8-da4f-4c24-9805-18a91b316bcd","Type":"ContainerDied","Data":"5086a69b5f8df7175222c9a53597ceeaa092692fca7dd2ea0dc59c15c50cec17"} Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.746387 4870 generic.go:334] "Generic (PLEG): container finished" podID="1d50529a-bc06-49a9-a5bf-64e91e8734c2" containerID="bbc787722aa3ad5d86b9358f4c93fb7295e3213baf6f9aa990b2876df2f315f2" exitCode=0 Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.746445 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85lwg" event={"ID":"1d50529a-bc06-49a9-a5bf-64e91e8734c2","Type":"ContainerDied","Data":"bbc787722aa3ad5d86b9358f4c93fb7295e3213baf6f9aa990b2876df2f315f2"} Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 
08:14:04.748329 4870 generic.go:334] "Generic (PLEG): container finished" podID="8ede517d-773d-4f0b-8c0a-42ae13359f95" containerID="97e57ef1c7ce66b357f1ed6e1f8847cdeaccd06e556466aa82594a1548b78355" exitCode=0 Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.748374 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" event={"ID":"8ede517d-773d-4f0b-8c0a-42ae13359f95","Type":"ContainerDied","Data":"97e57ef1c7ce66b357f1ed6e1f8847cdeaccd06e556466aa82594a1548b78355"} Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.750432 4870 generic.go:334] "Generic (PLEG): container finished" podID="ba2950a4-e1b9-45a9-9980-1b4169e0fb16" containerID="9c925d71b4dfdc55925a74993dfa3447a8c069656a12a2daf0cbfcade11ab1ed" exitCode=0 Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.750489 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk4lj" event={"ID":"ba2950a4-e1b9-45a9-9980-1b4169e0fb16","Type":"ContainerDied","Data":"9c925d71b4dfdc55925a74993dfa3447a8c069656a12a2daf0cbfcade11ab1ed"} Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.752172 4870 generic.go:334] "Generic (PLEG): container finished" podID="258d3e35-5580-4108-889c-9d5d2f80c810" containerID="5d7f1da1a59f0ee841deb45c6681be28192aa5d1b0765a1de1cc229f89986ccd" exitCode=0 Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.753061 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cx2x5" event={"ID":"258d3e35-5580-4108-889c-9d5d2f80c810","Type":"ContainerDied","Data":"5d7f1da1a59f0ee841deb45c6681be28192aa5d1b0765a1de1cc229f89986ccd"} Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.828727 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.854241 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.865226 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.874471 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jqng8" Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.955404 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.968779 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.974719 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-85lwg" Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.981328 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rk4lj" Jan 30 08:14:04 crc kubenswrapper[4870]: I0130 08:14:04.993524 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cx2x5" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.011787 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56cb5ce8-da4f-4c24-9805-18a91b316bcd-utilities\") pod \"56cb5ce8-da4f-4c24-9805-18a91b316bcd\" (UID: \"56cb5ce8-da4f-4c24-9805-18a91b316bcd\") " Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.011909 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz6vk\" (UniqueName: \"kubernetes.io/projected/56cb5ce8-da4f-4c24-9805-18a91b316bcd-kube-api-access-nz6vk\") pod \"56cb5ce8-da4f-4c24-9805-18a91b316bcd\" (UID: \"56cb5ce8-da4f-4c24-9805-18a91b316bcd\") " Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.011985 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56cb5ce8-da4f-4c24-9805-18a91b316bcd-catalog-content\") pod \"56cb5ce8-da4f-4c24-9805-18a91b316bcd\" (UID: \"56cb5ce8-da4f-4c24-9805-18a91b316bcd\") " Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.013856 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56cb5ce8-da4f-4c24-9805-18a91b316bcd-utilities" (OuterVolumeSpecName: "utilities") pod "56cb5ce8-da4f-4c24-9805-18a91b316bcd" (UID: "56cb5ce8-da4f-4c24-9805-18a91b316bcd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.021045 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56cb5ce8-da4f-4c24-9805-18a91b316bcd-kube-api-access-nz6vk" (OuterVolumeSpecName: "kube-api-access-nz6vk") pod "56cb5ce8-da4f-4c24-9805-18a91b316bcd" (UID: "56cb5ce8-da4f-4c24-9805-18a91b316bcd"). InnerVolumeSpecName "kube-api-access-nz6vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.037098 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56cb5ce8-da4f-4c24-9805-18a91b316bcd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56cb5ce8-da4f-4c24-9805-18a91b316bcd" (UID: "56cb5ce8-da4f-4c24-9805-18a91b316bcd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.073743 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.107700 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.113914 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44g22\" (UniqueName: \"kubernetes.io/projected/258d3e35-5580-4108-889c-9d5d2f80c810-kube-api-access-44g22\") pod \"258d3e35-5580-4108-889c-9d5d2f80c810\" (UID: \"258d3e35-5580-4108-889c-9d5d2f80c810\") " Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.114007 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba2950a4-e1b9-45a9-9980-1b4169e0fb16-catalog-content\") pod \"ba2950a4-e1b9-45a9-9980-1b4169e0fb16\" (UID: \"ba2950a4-e1b9-45a9-9980-1b4169e0fb16\") " Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.114384 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba2950a4-e1b9-45a9-9980-1b4169e0fb16-utilities\") pod \"ba2950a4-e1b9-45a9-9980-1b4169e0fb16\" (UID: \"ba2950a4-e1b9-45a9-9980-1b4169e0fb16\") " Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.114436 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmwtm\" (UniqueName: \"kubernetes.io/projected/8ede517d-773d-4f0b-8c0a-42ae13359f95-kube-api-access-qmwtm\") pod \"8ede517d-773d-4f0b-8c0a-42ae13359f95\" (UID: \"8ede517d-773d-4f0b-8c0a-42ae13359f95\") " Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.114537 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/258d3e35-5580-4108-889c-9d5d2f80c810-catalog-content\") pod \"258d3e35-5580-4108-889c-9d5d2f80c810\" (UID: \"258d3e35-5580-4108-889c-9d5d2f80c810\") " Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.114570 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ede517d-773d-4f0b-8c0a-42ae13359f95-marketplace-operator-metrics\") pod \"8ede517d-773d-4f0b-8c0a-42ae13359f95\" (UID: \"8ede517d-773d-4f0b-8c0a-42ae13359f95\") " Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.114614 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg24l\" (UniqueName: \"kubernetes.io/projected/ba2950a4-e1b9-45a9-9980-1b4169e0fb16-kube-api-access-rg24l\") pod \"ba2950a4-e1b9-45a9-9980-1b4169e0fb16\" (UID: \"ba2950a4-e1b9-45a9-9980-1b4169e0fb16\") " Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.114641 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drv8z\" (UniqueName: \"kubernetes.io/projected/1d50529a-bc06-49a9-a5bf-64e91e8734c2-kube-api-access-drv8z\") pod \"1d50529a-bc06-49a9-a5bf-64e91e8734c2\" (UID: \"1d50529a-bc06-49a9-a5bf-64e91e8734c2\") " Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.114690 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8ede517d-773d-4f0b-8c0a-42ae13359f95-marketplace-trusted-ca\") pod \"8ede517d-773d-4f0b-8c0a-42ae13359f95\" (UID: \"8ede517d-773d-4f0b-8c0a-42ae13359f95\") " Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.114709 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d50529a-bc06-49a9-a5bf-64e91e8734c2-utilities\") pod \"1d50529a-bc06-49a9-a5bf-64e91e8734c2\" (UID: \"1d50529a-bc06-49a9-a5bf-64e91e8734c2\") " Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.114738 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/258d3e35-5580-4108-889c-9d5d2f80c810-utilities\") pod \"258d3e35-5580-4108-889c-9d5d2f80c810\" (UID: \"258d3e35-5580-4108-889c-9d5d2f80c810\") " Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.115927 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d50529a-bc06-49a9-a5bf-64e91e8734c2-utilities" (OuterVolumeSpecName: "utilities") pod "1d50529a-bc06-49a9-a5bf-64e91e8734c2" (UID: "1d50529a-bc06-49a9-a5bf-64e91e8734c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.116196 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ede517d-773d-4f0b-8c0a-42ae13359f95-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "8ede517d-773d-4f0b-8c0a-42ae13359f95" (UID: "8ede517d-773d-4f0b-8c0a-42ae13359f95"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.116438 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba2950a4-e1b9-45a9-9980-1b4169e0fb16-utilities" (OuterVolumeSpecName: "utilities") pod "ba2950a4-e1b9-45a9-9980-1b4169e0fb16" (UID: "ba2950a4-e1b9-45a9-9980-1b4169e0fb16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.116457 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d50529a-bc06-49a9-a5bf-64e91e8734c2-catalog-content\") pod \"1d50529a-bc06-49a9-a5bf-64e91e8734c2\" (UID: \"1d50529a-bc06-49a9-a5bf-64e91e8734c2\") " Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.116606 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/258d3e35-5580-4108-889c-9d5d2f80c810-utilities" (OuterVolumeSpecName: "utilities") pod "258d3e35-5580-4108-889c-9d5d2f80c810" (UID: "258d3e35-5580-4108-889c-9d5d2f80c810"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.116703 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/258d3e35-5580-4108-889c-9d5d2f80c810-kube-api-access-44g22" (OuterVolumeSpecName: "kube-api-access-44g22") pod "258d3e35-5580-4108-889c-9d5d2f80c810" (UID: "258d3e35-5580-4108-889c-9d5d2f80c810"). InnerVolumeSpecName "kube-api-access-44g22". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.117657 4870 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ede517d-773d-4f0b-8c0a-42ae13359f95-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.117689 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d50529a-bc06-49a9-a5bf-64e91e8734c2-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.117713 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/258d3e35-5580-4108-889c-9d5d2f80c810-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.117727 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56cb5ce8-da4f-4c24-9805-18a91b316bcd-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.117740 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44g22\" (UniqueName: \"kubernetes.io/projected/258d3e35-5580-4108-889c-9d5d2f80c810-kube-api-access-44g22\") on node \"crc\" DevicePath \"\"" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.117752 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba2950a4-e1b9-45a9-9980-1b4169e0fb16-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.117764 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz6vk\" (UniqueName: \"kubernetes.io/projected/56cb5ce8-da4f-4c24-9805-18a91b316bcd-kube-api-access-nz6vk\") on node \"crc\" DevicePath \"\"" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.117776 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56cb5ce8-da4f-4c24-9805-18a91b316bcd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.121776 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ede517d-773d-4f0b-8c0a-42ae13359f95-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "8ede517d-773d-4f0b-8c0a-42ae13359f95" (UID: "8ede517d-773d-4f0b-8c0a-42ae13359f95"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.121853 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d50529a-bc06-49a9-a5bf-64e91e8734c2-kube-api-access-drv8z" (OuterVolumeSpecName: "kube-api-access-drv8z") pod "1d50529a-bc06-49a9-a5bf-64e91e8734c2" (UID: "1d50529a-bc06-49a9-a5bf-64e91e8734c2"). InnerVolumeSpecName "kube-api-access-drv8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.121910 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ede517d-773d-4f0b-8c0a-42ae13359f95-kube-api-access-qmwtm" (OuterVolumeSpecName: "kube-api-access-qmwtm") pod "8ede517d-773d-4f0b-8c0a-42ae13359f95" (UID: "8ede517d-773d-4f0b-8c0a-42ae13359f95"). InnerVolumeSpecName "kube-api-access-qmwtm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.122402 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba2950a4-e1b9-45a9-9980-1b4169e0fb16-kube-api-access-rg24l" (OuterVolumeSpecName: "kube-api-access-rg24l") pod "ba2950a4-e1b9-45a9-9980-1b4169e0fb16" (UID: "ba2950a4-e1b9-45a9-9980-1b4169e0fb16"). InnerVolumeSpecName "kube-api-access-rg24l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.161744 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.182941 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/258d3e35-5580-4108-889c-9d5d2f80c810-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "258d3e35-5580-4108-889c-9d5d2f80c810" (UID: "258d3e35-5580-4108-889c-9d5d2f80c810"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.187205 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.192523 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba2950a4-e1b9-45a9-9980-1b4169e0fb16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba2950a4-e1b9-45a9-9980-1b4169e0fb16" (UID: "ba2950a4-e1b9-45a9-9980-1b4169e0fb16"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.219082 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/258d3e35-5580-4108-889c-9d5d2f80c810-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.219117 4870 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ede517d-773d-4f0b-8c0a-42ae13359f95-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.219133 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg24l\" (UniqueName: \"kubernetes.io/projected/ba2950a4-e1b9-45a9-9980-1b4169e0fb16-kube-api-access-rg24l\") on node \"crc\" DevicePath \"\"" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.219146 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drv8z\" (UniqueName: \"kubernetes.io/projected/1d50529a-bc06-49a9-a5bf-64e91e8734c2-kube-api-access-drv8z\") on node \"crc\" DevicePath \"\"" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.219162 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba2950a4-e1b9-45a9-9980-1b4169e0fb16-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.219175 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmwtm\" (UniqueName: \"kubernetes.io/projected/8ede517d-773d-4f0b-8c0a-42ae13359f95-kube-api-access-qmwtm\") on node \"crc\" DevicePath \"\"" Jan 30 08:14:05 crc kubenswrapper[4870]: 
I0130 08:14:05.240689 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.261242 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d50529a-bc06-49a9-a5bf-64e91e8734c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d50529a-bc06-49a9-a5bf-64e91e8734c2" (UID: "1d50529a-bc06-49a9-a5bf-64e91e8734c2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.321103 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d50529a-bc06-49a9-a5bf-64e91e8734c2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.325905 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.390138 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.400464 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.499373 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.556060 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.559782 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.584696 4870 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.644384 4870 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.719224 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.733633 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.761557 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk4lj" event={"ID":"ba2950a4-e1b9-45a9-9980-1b4169e0fb16","Type":"ContainerDied","Data":"b4deb94680d10a0e49b737adc1e5d0d479b58878615ce9ba8009bd204fb58e39"} Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.761678 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rk4lj" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.761734 4870 scope.go:117] "RemoveContainer" containerID="9c925d71b4dfdc55925a74993dfa3447a8c069656a12a2daf0cbfcade11ab1ed" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.764419 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cx2x5" event={"ID":"258d3e35-5580-4108-889c-9d5d2f80c810","Type":"ContainerDied","Data":"ea85190d876bcbca144726c237a14b6d31ba3248e8f165a1e622d666e72b6022"} Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.764549 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cx2x5" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.768539 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqng8" event={"ID":"56cb5ce8-da4f-4c24-9805-18a91b316bcd","Type":"ContainerDied","Data":"5318a5759e8a4ecffb11be37d9689df0b960dc674f99fd5d3cb764e4f3066de3"} Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.768557 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jqng8" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.771855 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-85lwg" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.771869 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85lwg" event={"ID":"1d50529a-bc06-49a9-a5bf-64e91e8734c2","Type":"ContainerDied","Data":"1b14874ab64bd9943b3954bf834f4ae30ab6a234601d5bd7fe08c6631f1c0819"} Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.774238 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" event={"ID":"8ede517d-773d-4f0b-8c0a-42ae13359f95","Type":"ContainerDied","Data":"3b948b615fba724f1687e73e5fcca06ca297443c072f9ebaf1a3471eb522792b"} Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.774938 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jh9j6" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.776778 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.788919 4870 scope.go:117] "RemoveContainer" containerID="4e94e4129ecab37de0297dde4dc86e9ac30e8fda6a11f59af65a8c199b125d87" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.825018 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rk4lj"] Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.835957 4870 scope.go:117] "RemoveContainer" containerID="2e15cf3e43d60efa400786600f10aabddcac1a402cf20155c96332c4d505ad73" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.837856 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rk4lj"] Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.842364 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jqng8"] Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.845494 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jqng8"] Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.849110 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jh9j6"] Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.852218 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jh9j6"] Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.857536 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cx2x5"] Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.860379 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cx2x5"] Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.864180 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-85lwg"] Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.866865 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-85lwg"] Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.868365 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.879655 4870 scope.go:117] "RemoveContainer" containerID="5d7f1da1a59f0ee841deb45c6681be28192aa5d1b0765a1de1cc229f89986ccd" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.895025 4870 scope.go:117] "RemoveContainer" containerID="31bdc406d04a8518a48f85291f438714500a3199ef4565a4e1bcc218ea393cac" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.903469 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.914475 4870 scope.go:117] "RemoveContainer" containerID="b8e1ab4ce4d07cf81dd3964239182751d6d8a8cb595e0cabe44b1efd32e0f612" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.930182 4870 scope.go:117] "RemoveContainer" containerID="5086a69b5f8df7175222c9a53597ceeaa092692fca7dd2ea0dc59c15c50cec17" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.946907 4870 scope.go:117] "RemoveContainer" 
containerID="6734abf7e123160f7f9ec15e63bcacb2803b7e9b5f597cb9ce9439f6abad0e28" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.963782 4870 scope.go:117] "RemoveContainer" containerID="8a3a4ecde2801a20f3bb4ccdc68bab1d46b831e5569a15eb1e5876330bbb7d42" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.978977 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.982536 4870 scope.go:117] "RemoveContainer" containerID="bbc787722aa3ad5d86b9358f4c93fb7295e3213baf6f9aa990b2876df2f315f2" Jan 30 08:14:05 crc kubenswrapper[4870]: I0130 08:14:05.997480 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.002164 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.002463 4870 scope.go:117] "RemoveContainer" containerID="d0dc443c5c9b20693d4448270af7993e64d959e03cda7b880c0de95b2ee5007b" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.006861 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.010393 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.022665 4870 scope.go:117] "RemoveContainer" containerID="f4984448372f3c99bd2eb627d2f6a37eee0cab48c315336c3d5192e15f6bb85e" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.042634 4870 scope.go:117] "RemoveContainer" containerID="97e57ef1c7ce66b357f1ed6e1f8847cdeaccd06e556466aa82594a1548b78355" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.071107 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.085414 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d50529a-bc06-49a9-a5bf-64e91e8734c2" path="/var/lib/kubelet/pods/1d50529a-bc06-49a9-a5bf-64e91e8734c2/volumes" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.086362 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="258d3e35-5580-4108-889c-9d5d2f80c810" path="/var/lib/kubelet/pods/258d3e35-5580-4108-889c-9d5d2f80c810/volumes" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.087439 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56cb5ce8-da4f-4c24-9805-18a91b316bcd" path="/var/lib/kubelet/pods/56cb5ce8-da4f-4c24-9805-18a91b316bcd/volumes" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.089063 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ede517d-773d-4f0b-8c0a-42ae13359f95" path="/var/lib/kubelet/pods/8ede517d-773d-4f0b-8c0a-42ae13359f95/volumes" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.089722 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba2950a4-e1b9-45a9-9980-1b4169e0fb16" path="/var/lib/kubelet/pods/ba2950a4-e1b9-45a9-9980-1b4169e0fb16/volumes" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.392300 4870 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.405715 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.406529 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.422279 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.428067 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.581244 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.588988 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.617912 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.686250 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.693623 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.781816 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.895503 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 08:14:06 crc kubenswrapper[4870]: I0130 08:14:06.987072 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 08:14:07 crc kubenswrapper[4870]: I0130 08:14:07.032009 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 08:14:07 crc kubenswrapper[4870]: I0130 08:14:07.203520 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 08:14:07 crc kubenswrapper[4870]: I0130 08:14:07.243966 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 08:14:07 crc kubenswrapper[4870]: I0130 08:14:07.248192 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 08:14:07 crc kubenswrapper[4870]: I0130 08:14:07.369258 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 08:14:07 crc kubenswrapper[4870]: I0130 08:14:07.375825 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 08:14:07 crc kubenswrapper[4870]: I0130 08:14:07.486542 4870 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 08:14:07 crc kubenswrapper[4870]: I0130 08:14:07.527764 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 08:14:07 crc kubenswrapper[4870]: I0130 08:14:07.558727 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 08:14:07 crc kubenswrapper[4870]: I0130 08:14:07.566326 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 08:14:07 crc kubenswrapper[4870]: I0130 08:14:07.598417 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 08:14:07 crc kubenswrapper[4870]: I0130 08:14:07.626679 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 08:14:07 crc kubenswrapper[4870]: I0130 08:14:07.679205 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 08:14:07 crc kubenswrapper[4870]: I0130 08:14:07.713720 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 08:14:07 crc kubenswrapper[4870]: I0130 08:14:07.757733 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 08:14:07 crc kubenswrapper[4870]: I0130 08:14:07.758162 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 08:14:07 crc kubenswrapper[4870]: I0130 08:14:07.814141 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 08:14:07 crc kubenswrapper[4870]: I0130 08:14:07.814294 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 08:14:07 crc kubenswrapper[4870]: I0130 08:14:07.908333 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 08:14:07 crc kubenswrapper[4870]: I0130 08:14:07.909528 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.019513 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.070413 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.086033 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.139200 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.157207 4870 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.271687 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.293353 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.368325 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.420259 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.472279 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.548140 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.548855 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.556787 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.561795 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.567596 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.569744 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.633690 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.710603 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.742782 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.811040 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.851274 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.886840 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.896910 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.991182 4870 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 08:14:08 crc kubenswrapper[4870]: I0130 08:14:08.993505 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.015527 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.100843 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.145453 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.154750 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.164253 4870 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.218480 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.222783 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.224397 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.245026 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.260578 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.304110 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.422927 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.445912 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.457032 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.481517 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.495059 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.513020 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 
08:14:09.516442 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.573017 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.675066 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.742026 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.755668 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.774990 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.795166 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.804171 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.809183 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.835524 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.844676 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.845673 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.943226 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.952441 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 08:14:09 crc kubenswrapper[4870]: I0130 08:14:09.974653 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 08:14:10 crc kubenswrapper[4870]: I0130 08:14:10.107698 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 08:14:10 crc kubenswrapper[4870]: I0130 08:14:10.181019 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 08:14:10 crc kubenswrapper[4870]: I0130 08:14:10.187170 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 08:14:10 crc kubenswrapper[4870]: I0130 08:14:10.208296 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 08:14:10 crc kubenswrapper[4870]: I0130 08:14:10.225964 4870 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-tls" Jan 30 08:14:10 crc kubenswrapper[4870]: I0130 08:14:10.344245 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 08:14:10 crc kubenswrapper[4870]: I0130 08:14:10.455202 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 08:14:10 crc kubenswrapper[4870]: I0130 08:14:10.671107 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 08:14:10 crc kubenswrapper[4870]: I0130 08:14:10.677387 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 08:14:10 crc kubenswrapper[4870]: I0130 08:14:10.701500 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 08:14:10 crc kubenswrapper[4870]: I0130 08:14:10.741590 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 08:14:10 crc kubenswrapper[4870]: I0130 08:14:10.760709 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 08:14:10 crc kubenswrapper[4870]: I0130 08:14:10.893126 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 08:14:10 crc kubenswrapper[4870]: I0130 08:14:10.915120 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 08:14:10 crc kubenswrapper[4870]: I0130 08:14:10.919055 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 08:14:10 crc kubenswrapper[4870]: I0130 08:14:10.980938 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 08:14:11 crc kubenswrapper[4870]: I0130 08:14:11.019897 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 08:14:11 crc kubenswrapper[4870]: I0130 08:14:11.034734 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 08:14:11 crc kubenswrapper[4870]: I0130 08:14:11.086201 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 08:14:11 crc kubenswrapper[4870]: I0130 08:14:11.102831 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 08:14:11 crc kubenswrapper[4870]: I0130 08:14:11.274003 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 08:14:11 crc kubenswrapper[4870]: I0130 08:14:11.350763 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 08:14:11 crc kubenswrapper[4870]: I0130 08:14:11.406409 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 08:14:11 crc kubenswrapper[4870]: I0130 08:14:11.467994 4870 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-console"/"networking-console-plugin" Jan 30 08:14:11 crc kubenswrapper[4870]: I0130 08:14:11.523692 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 08:14:11 crc kubenswrapper[4870]: I0130 08:14:11.614173 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 08:14:11 crc kubenswrapper[4870]: I0130 08:14:11.707807 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 08:14:11 crc kubenswrapper[4870]: I0130 08:14:11.815743 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 08:14:11 crc kubenswrapper[4870]: I0130 08:14:11.844164 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 08:14:11 crc kubenswrapper[4870]: I0130 08:14:11.854246 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 08:14:11 crc kubenswrapper[4870]: I0130 08:14:11.957773 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 08:14:12 crc kubenswrapper[4870]: I0130 08:14:12.022526 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 08:14:12 crc kubenswrapper[4870]: I0130 08:14:12.030047 4870 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 08:14:12 crc kubenswrapper[4870]: I0130 08:14:12.030373 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://c0c72089a10d44666fb67e168c5b95e04bd873513b7b27b2f406fcf4e49ed8d1" gracePeriod=5 Jan 30 08:14:12 crc kubenswrapper[4870]: I0130 08:14:12.245839 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 08:14:12 crc kubenswrapper[4870]: I0130 08:14:12.275096 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 08:14:12 crc kubenswrapper[4870]: I0130 08:14:12.363409 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 08:14:12 crc kubenswrapper[4870]: I0130 08:14:12.378826 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 08:14:12 crc kubenswrapper[4870]: I0130 08:14:12.470396 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 08:14:12 crc kubenswrapper[4870]: I0130 08:14:12.696902 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 08:14:12 crc kubenswrapper[4870]: I0130 08:14:12.727072 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 08:14:12 crc kubenswrapper[4870]: I0130 08:14:12.749812 4870 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 08:14:12 crc kubenswrapper[4870]: I0130 08:14:12.754244 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 08:14:12 crc kubenswrapper[4870]: I0130 08:14:12.769999 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 08:14:12 crc kubenswrapper[4870]: I0130 08:14:12.834081 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 08:14:12 crc kubenswrapper[4870]: I0130 08:14:12.838705 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 08:14:12 crc kubenswrapper[4870]: I0130 08:14:12.947928 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 08:14:12 crc kubenswrapper[4870]: I0130 08:14:12.973988 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 08:14:13 crc kubenswrapper[4870]: I0130 08:14:13.127900 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 08:14:13 crc kubenswrapper[4870]: I0130 08:14:13.183471 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 08:14:13 crc kubenswrapper[4870]: I0130 08:14:13.390623 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 08:14:13 crc kubenswrapper[4870]: I0130 08:14:13.430365 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 08:14:13 crc kubenswrapper[4870]: I0130 08:14:13.457805 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 08:14:13 crc kubenswrapper[4870]: I0130 08:14:13.480616 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 08:14:13 crc kubenswrapper[4870]: I0130 08:14:13.583955 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 08:14:13 crc kubenswrapper[4870]: I0130 08:14:13.612825 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 08:14:13 crc kubenswrapper[4870]: I0130 08:14:13.623141 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 08:14:13 crc kubenswrapper[4870]: I0130 08:14:13.710619 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 08:14:13 crc kubenswrapper[4870]: I0130 08:14:13.988606 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 08:14:14 crc kubenswrapper[4870]: I0130 08:14:14.037597 4870 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 08:14:14 crc kubenswrapper[4870]: I0130 08:14:14.055215 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 08:14:14 crc kubenswrapper[4870]: I0130 08:14:14.137723 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 08:14:14 crc kubenswrapper[4870]: I0130 08:14:14.207852 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 08:14:14 crc kubenswrapper[4870]: I0130 08:14:14.456835 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 08:14:14 crc kubenswrapper[4870]: I0130 08:14:14.471893 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 08:14:14 crc kubenswrapper[4870]: I0130 08:14:14.484833 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 08:14:14 crc kubenswrapper[4870]: I0130 08:14:14.653853 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 08:14:14 crc kubenswrapper[4870]: I0130 08:14:14.755792 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 08:14:14 crc kubenswrapper[4870]: I0130 08:14:14.902894 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 08:14:14 crc kubenswrapper[4870]: I0130 08:14:14.904578 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 08:14:15 crc kubenswrapper[4870]: I0130 08:14:15.199688 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 08:14:15 crc kubenswrapper[4870]: I0130 08:14:15.350797 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 08:14:15 crc kubenswrapper[4870]: I0130 08:14:15.527667 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 08:14:16 crc kubenswrapper[4870]: I0130 08:14:16.233913 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 08:14:16 crc kubenswrapper[4870]: I0130 08:14:16.385611 4870 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 08:14:16 crc kubenswrapper[4870]: I0130 08:14:16.512727 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 08:14:16 crc kubenswrapper[4870]: I0130 08:14:16.735202 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 08:14:16 crc kubenswrapper[4870]: I0130 08:14:16.777444 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.630806 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.631413 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.732723 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.732921 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.732926 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.732961 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.733020 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.733177 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.733227 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.733338 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.733418 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.733692 4870 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.733718 4870 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.733733 4870 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.733746 4870 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.742317 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.834902 4870 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.869357 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.869416 4870 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="c0c72089a10d44666fb67e168c5b95e04bd873513b7b27b2f406fcf4e49ed8d1" exitCode=137 Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.869465 4870 scope.go:117] "RemoveContainer" containerID="c0c72089a10d44666fb67e168c5b95e04bd873513b7b27b2f406fcf4e49ed8d1" Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.869630 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.888975 4870 scope.go:117] "RemoveContainer" containerID="c0c72089a10d44666fb67e168c5b95e04bd873513b7b27b2f406fcf4e49ed8d1" Jan 30 08:14:17 crc kubenswrapper[4870]: E0130 08:14:17.889637 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0c72089a10d44666fb67e168c5b95e04bd873513b7b27b2f406fcf4e49ed8d1\": container with ID starting with c0c72089a10d44666fb67e168c5b95e04bd873513b7b27b2f406fcf4e49ed8d1 not found: ID does not exist" containerID="c0c72089a10d44666fb67e168c5b95e04bd873513b7b27b2f406fcf4e49ed8d1" Jan 30 08:14:17 crc kubenswrapper[4870]: I0130 08:14:17.889709 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0c72089a10d44666fb67e168c5b95e04bd873513b7b27b2f406fcf4e49ed8d1"} err="failed to get container status \"c0c72089a10d44666fb67e168c5b95e04bd873513b7b27b2f406fcf4e49ed8d1\": rpc error: code = NotFound desc = could not find container \"c0c72089a10d44666fb67e168c5b95e04bd873513b7b27b2f406fcf4e49ed8d1\": container with ID starting with c0c72089a10d44666fb67e168c5b95e04bd873513b7b27b2f406fcf4e49ed8d1 not found: ID does not exist" Jan 30 08:14:18 crc kubenswrapper[4870]: I0130 08:14:18.082861 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 30 08:14:18 crc kubenswrapper[4870]: I0130 08:14:18.341732 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.033249 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vkhzd"] Jan 30 08:14:19 crc kubenswrapper[4870]: E0130 08:14:19.034045 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cc9cc4-dd06-46a2-94c4-0b1977cd1adc" containerName="installer" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034062 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cc9cc4-dd06-46a2-94c4-0b1977cd1adc" containerName="installer" Jan 30 08:14:19 crc kubenswrapper[4870]: E0130 08:14:19.034073 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="258d3e35-5580-4108-889c-9d5d2f80c810" containerName="extract-utilities" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034082 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="258d3e35-5580-4108-889c-9d5d2f80c810" containerName="extract-utilities" Jan 30 08:14:19 crc kubenswrapper[4870]: E0130 08:14:19.034096 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ede517d-773d-4f0b-8c0a-42ae13359f95" containerName="marketplace-operator" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034104 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ede517d-773d-4f0b-8c0a-42ae13359f95" containerName="marketplace-operator" Jan 30 08:14:19 crc kubenswrapper[4870]: E0130 08:14:19.034112 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba2950a4-e1b9-45a9-9980-1b4169e0fb16" containerName="registry-server" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034118 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba2950a4-e1b9-45a9-9980-1b4169e0fb16" containerName="registry-server" Jan 30 08:14:19 crc 
kubenswrapper[4870]: E0130 08:14:19.034128 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="258d3e35-5580-4108-889c-9d5d2f80c810" containerName="registry-server" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034134 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="258d3e35-5580-4108-889c-9d5d2f80c810" containerName="registry-server" Jan 30 08:14:19 crc kubenswrapper[4870]: E0130 08:14:19.034141 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034148 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 08:14:19 crc kubenswrapper[4870]: E0130 08:14:19.034158 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d50529a-bc06-49a9-a5bf-64e91e8734c2" containerName="extract-content" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034164 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d50529a-bc06-49a9-a5bf-64e91e8734c2" containerName="extract-content" Jan 30 08:14:19 crc kubenswrapper[4870]: E0130 08:14:19.034174 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d50529a-bc06-49a9-a5bf-64e91e8734c2" containerName="extract-utilities" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034182 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d50529a-bc06-49a9-a5bf-64e91e8734c2" containerName="extract-utilities" Jan 30 08:14:19 crc kubenswrapper[4870]: E0130 08:14:19.034192 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56cb5ce8-da4f-4c24-9805-18a91b316bcd" containerName="registry-server" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034198 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="56cb5ce8-da4f-4c24-9805-18a91b316bcd" containerName="registry-server" Jan 30 08:14:19 crc kubenswrapper[4870]: E0130 08:14:19.034237 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="258d3e35-5580-4108-889c-9d5d2f80c810" containerName="extract-content" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034244 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="258d3e35-5580-4108-889c-9d5d2f80c810" containerName="extract-content" Jan 30 08:14:19 crc kubenswrapper[4870]: E0130 08:14:19.034258 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56cb5ce8-da4f-4c24-9805-18a91b316bcd" containerName="extract-content" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034264 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="56cb5ce8-da4f-4c24-9805-18a91b316bcd" containerName="extract-content" Jan 30 08:14:19 crc kubenswrapper[4870]: E0130 08:14:19.034274 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba2950a4-e1b9-45a9-9980-1b4169e0fb16" containerName="extract-utilities" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034281 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba2950a4-e1b9-45a9-9980-1b4169e0fb16" containerName="extract-utilities" Jan 30 08:14:19 crc kubenswrapper[4870]: E0130 08:14:19.034287 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d50529a-bc06-49a9-a5bf-64e91e8734c2" containerName="registry-server" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034293 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d50529a-bc06-49a9-a5bf-64e91e8734c2" containerName="registry-server" 
Jan 30 08:14:19 crc kubenswrapper[4870]: E0130 08:14:19.034300 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba2950a4-e1b9-45a9-9980-1b4169e0fb16" containerName="extract-content" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034308 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba2950a4-e1b9-45a9-9980-1b4169e0fb16" containerName="extract-content" Jan 30 08:14:19 crc kubenswrapper[4870]: E0130 08:14:19.034316 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56cb5ce8-da4f-4c24-9805-18a91b316bcd" containerName="extract-utilities" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034323 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="56cb5ce8-da4f-4c24-9805-18a91b316bcd" containerName="extract-utilities" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034413 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d50529a-bc06-49a9-a5bf-64e91e8734c2" containerName="registry-server" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034425 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="56cb5ce8-da4f-4c24-9805-18a91b316bcd" containerName="registry-server" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034433 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="258d3e35-5580-4108-889c-9d5d2f80c810" containerName="registry-server" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034442 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="08cc9cc4-dd06-46a2-94c4-0b1977cd1adc" containerName="installer" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034451 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba2950a4-e1b9-45a9-9980-1b4169e0fb16" containerName="registry-server" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034460 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ede517d-773d-4f0b-8c0a-42ae13359f95" containerName="marketplace-operator" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034468 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.034985 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vkhzd"
Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.040210 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.041806 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.042043 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.056386 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.070127 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.086909 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vkhzd"]
Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.170032 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83d46dd9-5ab7-44c9-b032-1241911b6d82-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vkhzd\" (UID: \"83d46dd9-5ab7-44c9-b032-1241911b6d82\") " pod="openshift-marketplace/marketplace-operator-79b997595-vkhzd"
Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.170100 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gthr6\" (UniqueName: \"kubernetes.io/projected/83d46dd9-5ab7-44c9-b032-1241911b6d82-kube-api-access-gthr6\") pod \"marketplace-operator-79b997595-vkhzd\" (UID: \"83d46dd9-5ab7-44c9-b032-1241911b6d82\") " pod="openshift-marketplace/marketplace-operator-79b997595-vkhzd"
Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.170152 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83d46dd9-5ab7-44c9-b032-1241911b6d82-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vkhzd\" (UID: \"83d46dd9-5ab7-44c9-b032-1241911b6d82\") " pod="openshift-marketplace/marketplace-operator-79b997595-vkhzd"
Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.271795 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gthr6\" (UniqueName: \"kubernetes.io/projected/83d46dd9-5ab7-44c9-b032-1241911b6d82-kube-api-access-gthr6\") pod \"marketplace-operator-79b997595-vkhzd\" (UID: \"83d46dd9-5ab7-44c9-b032-1241911b6d82\") " pod="openshift-marketplace/marketplace-operator-79b997595-vkhzd"
Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.271857 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83d46dd9-5ab7-44c9-b032-1241911b6d82-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vkhzd\" (UID: \"83d46dd9-5ab7-44c9-b032-1241911b6d82\") " pod="openshift-marketplace/marketplace-operator-79b997595-vkhzd"
Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.271935 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83d46dd9-5ab7-44c9-b032-1241911b6d82-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vkhzd\" (UID: \"83d46dd9-5ab7-44c9-b032-1241911b6d82\") " pod="openshift-marketplace/marketplace-operator-79b997595-vkhzd"
Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.273518 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83d46dd9-5ab7-44c9-b032-1241911b6d82-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vkhzd\" (UID: \"83d46dd9-5ab7-44c9-b032-1241911b6d82\") " pod="openshift-marketplace/marketplace-operator-79b997595-vkhzd"
Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.277298 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83d46dd9-5ab7-44c9-b032-1241911b6d82-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vkhzd\" (UID: \"83d46dd9-5ab7-44c9-b032-1241911b6d82\") " pod="openshift-marketplace/marketplace-operator-79b997595-vkhzd"
Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.305928 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gthr6\" (UniqueName: \"kubernetes.io/projected/83d46dd9-5ab7-44c9-b032-1241911b6d82-kube-api-access-gthr6\") pod \"marketplace-operator-79b997595-vkhzd\" (UID: \"83d46dd9-5ab7-44c9-b032-1241911b6d82\") " pod="openshift-marketplace/marketplace-operator-79b997595-vkhzd"
Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.352940 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vkhzd"
Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.840526 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vkhzd"]
Jan 30 08:14:19 crc kubenswrapper[4870]: W0130 08:14:19.847284 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83d46dd9_5ab7_44c9_b032_1241911b6d82.slice/crio-fb38729d476b3402e5fe7c9ab67d74ba742cb40594d8331879bd3cf0e116ee78 WatchSource:0}: Error finding container fb38729d476b3402e5fe7c9ab67d74ba742cb40594d8331879bd3cf0e116ee78: Status 404 returned error can't find the container with id fb38729d476b3402e5fe7c9ab67d74ba742cb40594d8331879bd3cf0e116ee78
Jan 30 08:14:19 crc kubenswrapper[4870]: I0130 08:14:19.884638 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vkhzd" event={"ID":"83d46dd9-5ab7-44c9-b032-1241911b6d82","Type":"ContainerStarted","Data":"fb38729d476b3402e5fe7c9ab67d74ba742cb40594d8331879bd3cf0e116ee78"}
Jan 30 08:14:20 crc kubenswrapper[4870]: I0130 08:14:20.892253 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vkhzd" event={"ID":"83d46dd9-5ab7-44c9-b032-1241911b6d82","Type":"ContainerStarted","Data":"fe96c7d47a996df0fd9fed7c61d5a9257f0f30c59f02d890063b39690be17911"}
Jan 30 08:14:20 crc kubenswrapper[4870]: I0130 08:14:20.892569 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vkhzd"
Jan 30 08:14:20 crc kubenswrapper[4870]: I0130 08:14:20.897743 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vkhzd"
Jan 30 08:14:20 crc kubenswrapper[4870]: I0130 08:14:20.913464 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vkhzd" podStartSLOduration=1.913445574 podStartE2EDuration="1.913445574s" podCreationTimestamp="2026-01-30 08:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:14:20.912895557 +0000 UTC m=+299.608442656" watchObservedRunningTime="2026-01-30 08:14:20.913445574 +0000 UTC m=+299.608992683"
Jan 30 08:14:21 crc kubenswrapper[4870]: I0130 08:14:21.856660 4870 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 30 08:15:00 crc kubenswrapper[4870]: I0130 08:15:00.190912 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496015-h7vrc"]
Jan 30 08:15:00 crc kubenswrapper[4870]: I0130 08:15:00.192224 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-h7vrc"
Jan 30 08:15:00 crc kubenswrapper[4870]: I0130 08:15:00.196429 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 08:15:00 crc kubenswrapper[4870]: I0130 08:15:00.197556 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 08:15:00 crc kubenswrapper[4870]: I0130 08:15:00.206182 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496015-h7vrc"]
Jan 30 08:15:00 crc kubenswrapper[4870]: I0130 08:15:00.350318 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84ecc6b5-f0f3-40b1-ba86-24eabdbdc409-secret-volume\") pod \"collect-profiles-29496015-h7vrc\" (UID: \"84ecc6b5-f0f3-40b1-ba86-24eabdbdc409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-h7vrc"
Jan 30 08:15:00 crc kubenswrapper[4870]: I0130 08:15:00.350391 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d2tx\" (UniqueName: \"kubernetes.io/projected/84ecc6b5-f0f3-40b1-ba86-24eabdbdc409-kube-api-access-7d2tx\") pod \"collect-profiles-29496015-h7vrc\" (UID: \"84ecc6b5-f0f3-40b1-ba86-24eabdbdc409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-h7vrc"
Jan 30 08:15:00 crc kubenswrapper[4870]: I0130 08:15:00.350448 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84ecc6b5-f0f3-40b1-ba86-24eabdbdc409-config-volume\") pod \"collect-profiles-29496015-h7vrc\" (UID: \"84ecc6b5-f0f3-40b1-ba86-24eabdbdc409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-h7vrc"
Jan 30 08:15:00 crc kubenswrapper[4870]: I0130 08:15:00.451758 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84ecc6b5-f0f3-40b1-ba86-24eabdbdc409-secret-volume\") pod \"collect-profiles-29496015-h7vrc\" (UID: \"84ecc6b5-f0f3-40b1-ba86-24eabdbdc409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-h7vrc"
Jan 30 08:15:00 crc kubenswrapper[4870]: I0130 08:15:00.451810 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d2tx\" (UniqueName: \"kubernetes.io/projected/84ecc6b5-f0f3-40b1-ba86-24eabdbdc409-kube-api-access-7d2tx\") pod \"collect-profiles-29496015-h7vrc\" (UID: \"84ecc6b5-f0f3-40b1-ba86-24eabdbdc409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-h7vrc"
Jan 30 08:15:00 crc kubenswrapper[4870]: I0130 08:15:00.451859 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84ecc6b5-f0f3-40b1-ba86-24eabdbdc409-config-volume\") pod \"collect-profiles-29496015-h7vrc\" (UID: \"84ecc6b5-f0f3-40b1-ba86-24eabdbdc409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-h7vrc"
Jan 30 08:15:00 crc kubenswrapper[4870]: I0130 08:15:00.452799 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84ecc6b5-f0f3-40b1-ba86-24eabdbdc409-config-volume\") pod \"collect-profiles-29496015-h7vrc\" (UID: \"84ecc6b5-f0f3-40b1-ba86-24eabdbdc409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-h7vrc"
Jan 30 08:15:00 crc kubenswrapper[4870]: I0130 08:15:00.459359 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84ecc6b5-f0f3-40b1-ba86-24eabdbdc409-secret-volume\") pod \"collect-profiles-29496015-h7vrc\" (UID: \"84ecc6b5-f0f3-40b1-ba86-24eabdbdc409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-h7vrc"
Jan 30 08:15:00 crc kubenswrapper[4870]: I0130 08:15:00.469141 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d2tx\" (UniqueName: \"kubernetes.io/projected/84ecc6b5-f0f3-40b1-ba86-24eabdbdc409-kube-api-access-7d2tx\") pod \"collect-profiles-29496015-h7vrc\" (UID: \"84ecc6b5-f0f3-40b1-ba86-24eabdbdc409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-h7vrc"
Jan 30 08:15:00 crc kubenswrapper[4870]: I0130 08:15:00.527093 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-h7vrc"
Jan 30 08:15:01 crc kubenswrapper[4870]: I0130 08:15:01.013404 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496015-h7vrc"]
Jan 30 08:15:01 crc kubenswrapper[4870]: I0130 08:15:01.182985 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-h7vrc" event={"ID":"84ecc6b5-f0f3-40b1-ba86-24eabdbdc409","Type":"ContainerStarted","Data":"8d0d384940320938f76b4606a7945e7e85fc7299d14ce535b306384ff5a56415"}
Jan 30 08:15:02 crc kubenswrapper[4870]: I0130 08:15:02.192625 4870 generic.go:334] "Generic (PLEG): container finished" podID="84ecc6b5-f0f3-40b1-ba86-24eabdbdc409" containerID="906fa4603bfe71976f941c25c726c6a5f3b1b9c0bede621580c2910f359fd6f2" exitCode=0
Jan 30 08:15:02 crc kubenswrapper[4870]: I0130 08:15:02.192707 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-h7vrc" event={"ID":"84ecc6b5-f0f3-40b1-ba86-24eabdbdc409","Type":"ContainerDied","Data":"906fa4603bfe71976f941c25c726c6a5f3b1b9c0bede621580c2910f359fd6f2"}
Jan 30 08:15:03 crc kubenswrapper[4870]: I0130 08:15:03.508612 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-h7vrc"
Jan 30 08:15:03 crc kubenswrapper[4870]: I0130 08:15:03.550637 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d2tx\" (UniqueName: \"kubernetes.io/projected/84ecc6b5-f0f3-40b1-ba86-24eabdbdc409-kube-api-access-7d2tx\") pod \"84ecc6b5-f0f3-40b1-ba86-24eabdbdc409\" (UID: \"84ecc6b5-f0f3-40b1-ba86-24eabdbdc409\") "
Jan 30 08:15:03 crc kubenswrapper[4870]: I0130 08:15:03.550735 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84ecc6b5-f0f3-40b1-ba86-24eabdbdc409-secret-volume\") pod \"84ecc6b5-f0f3-40b1-ba86-24eabdbdc409\" (UID: \"84ecc6b5-f0f3-40b1-ba86-24eabdbdc409\") "
Jan 30 08:15:03 crc kubenswrapper[4870]: I0130 08:15:03.550849 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84ecc6b5-f0f3-40b1-ba86-24eabdbdc409-config-volume\") pod \"84ecc6b5-f0f3-40b1-ba86-24eabdbdc409\" (UID: \"84ecc6b5-f0f3-40b1-ba86-24eabdbdc409\") "
Jan 30 08:15:03 crc kubenswrapper[4870]: I0130 08:15:03.551966 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84ecc6b5-f0f3-40b1-ba86-24eabdbdc409-config-volume" (OuterVolumeSpecName: "config-volume") pod "84ecc6b5-f0f3-40b1-ba86-24eabdbdc409" (UID: "84ecc6b5-f0f3-40b1-ba86-24eabdbdc409"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:15:03 crc kubenswrapper[4870]: I0130 08:15:03.560104 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84ecc6b5-f0f3-40b1-ba86-24eabdbdc409-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "84ecc6b5-f0f3-40b1-ba86-24eabdbdc409" (UID: "84ecc6b5-f0f3-40b1-ba86-24eabdbdc409"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:15:03 crc kubenswrapper[4870]: I0130 08:15:03.560129 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84ecc6b5-f0f3-40b1-ba86-24eabdbdc409-kube-api-access-7d2tx" (OuterVolumeSpecName: "kube-api-access-7d2tx") pod "84ecc6b5-f0f3-40b1-ba86-24eabdbdc409" (UID: "84ecc6b5-f0f3-40b1-ba86-24eabdbdc409"). InnerVolumeSpecName "kube-api-access-7d2tx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:15:03 crc kubenswrapper[4870]: I0130 08:15:03.652997 4870 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84ecc6b5-f0f3-40b1-ba86-24eabdbdc409-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 08:15:03 crc kubenswrapper[4870]: I0130 08:15:03.653058 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d2tx\" (UniqueName: \"kubernetes.io/projected/84ecc6b5-f0f3-40b1-ba86-24eabdbdc409-kube-api-access-7d2tx\") on node \"crc\" DevicePath \"\""
Jan 30 08:15:03 crc kubenswrapper[4870]: I0130 08:15:03.653078 4870 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84ecc6b5-f0f3-40b1-ba86-24eabdbdc409-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.215964 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-h7vrc" event={"ID":"84ecc6b5-f0f3-40b1-ba86-24eabdbdc409","Type":"ContainerDied","Data":"8d0d384940320938f76b4606a7945e7e85fc7299d14ce535b306384ff5a56415"}
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.216400 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d0d384940320938f76b4606a7945e7e85fc7299d14ce535b306384ff5a56415"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.216066 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-h7vrc"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.496813 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-glxrr"]
Jan 30 08:15:04 crc kubenswrapper[4870]: E0130 08:15:04.497171 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84ecc6b5-f0f3-40b1-ba86-24eabdbdc409" containerName="collect-profiles"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.497194 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="84ecc6b5-f0f3-40b1-ba86-24eabdbdc409" containerName="collect-profiles"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.497426 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="84ecc6b5-f0f3-40b1-ba86-24eabdbdc409" containerName="collect-profiles"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.498768 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glxrr"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.502932 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.567582 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74668\" (UniqueName: \"kubernetes.io/projected/b1839882-74e1-4c94-9d83-849d10c41089-kube-api-access-74668\") pod \"redhat-marketplace-glxrr\" (UID: \"b1839882-74e1-4c94-9d83-849d10c41089\") " pod="openshift-marketplace/redhat-marketplace-glxrr"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.567692 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1839882-74e1-4c94-9d83-849d10c41089-catalog-content\") pod \"redhat-marketplace-glxrr\" (UID: \"b1839882-74e1-4c94-9d83-849d10c41089\") " pod="openshift-marketplace/redhat-marketplace-glxrr"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.568004 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1839882-74e1-4c94-9d83-849d10c41089-utilities\") pod \"redhat-marketplace-glxrr\" (UID: \"b1839882-74e1-4c94-9d83-849d10c41089\") " pod="openshift-marketplace/redhat-marketplace-glxrr"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.570766 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-glxrr"]
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.668984 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1839882-74e1-4c94-9d83-849d10c41089-catalog-content\") pod \"redhat-marketplace-glxrr\" (UID: \"b1839882-74e1-4c94-9d83-849d10c41089\") " pod="openshift-marketplace/redhat-marketplace-glxrr"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.669059 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1839882-74e1-4c94-9d83-849d10c41089-utilities\") pod \"redhat-marketplace-glxrr\" (UID: \"b1839882-74e1-4c94-9d83-849d10c41089\") " pod="openshift-marketplace/redhat-marketplace-glxrr"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.669137 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74668\" (UniqueName: \"kubernetes.io/projected/b1839882-74e1-4c94-9d83-849d10c41089-kube-api-access-74668\") pod \"redhat-marketplace-glxrr\" (UID: \"b1839882-74e1-4c94-9d83-849d10c41089\") " pod="openshift-marketplace/redhat-marketplace-glxrr"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.669944 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1839882-74e1-4c94-9d83-849d10c41089-utilities\") pod \"redhat-marketplace-glxrr\" (UID: \"b1839882-74e1-4c94-9d83-849d10c41089\") " pod="openshift-marketplace/redhat-marketplace-glxrr"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.670000 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1839882-74e1-4c94-9d83-849d10c41089-catalog-content\") pod \"redhat-marketplace-glxrr\" (UID: \"b1839882-74e1-4c94-9d83-849d10c41089\") " pod="openshift-marketplace/redhat-marketplace-glxrr"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.681976 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8dmqx"]
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.683310 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8dmqx"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.685915 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.694788 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74668\" (UniqueName: \"kubernetes.io/projected/b1839882-74e1-4c94-9d83-849d10c41089-kube-api-access-74668\") pod \"redhat-marketplace-glxrr\" (UID: \"b1839882-74e1-4c94-9d83-849d10c41089\") " pod="openshift-marketplace/redhat-marketplace-glxrr"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.703389 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8dmqx"]
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.770034 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rvqn\" (UniqueName: \"kubernetes.io/projected/71b77216-d7c7-4a69-8596-e64fd99129c6-kube-api-access-4rvqn\") pod \"redhat-operators-8dmqx\" (UID: \"71b77216-d7c7-4a69-8596-e64fd99129c6\") " pod="openshift-marketplace/redhat-operators-8dmqx"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.770078 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71b77216-d7c7-4a69-8596-e64fd99129c6-catalog-content\") pod \"redhat-operators-8dmqx\" (UID: \"71b77216-d7c7-4a69-8596-e64fd99129c6\") " pod="openshift-marketplace/redhat-operators-8dmqx"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.770131 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71b77216-d7c7-4a69-8596-e64fd99129c6-utilities\") pod \"redhat-operators-8dmqx\" (UID: \"71b77216-d7c7-4a69-8596-e64fd99129c6\") " pod="openshift-marketplace/redhat-operators-8dmqx"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.825262 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glxrr"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.871547 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71b77216-d7c7-4a69-8596-e64fd99129c6-utilities\") pod \"redhat-operators-8dmqx\" (UID: \"71b77216-d7c7-4a69-8596-e64fd99129c6\") " pod="openshift-marketplace/redhat-operators-8dmqx"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.871710 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rvqn\" (UniqueName: \"kubernetes.io/projected/71b77216-d7c7-4a69-8596-e64fd99129c6-kube-api-access-4rvqn\") pod \"redhat-operators-8dmqx\" (UID: \"71b77216-d7c7-4a69-8596-e64fd99129c6\") " pod="openshift-marketplace/redhat-operators-8dmqx"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.871761 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71b77216-d7c7-4a69-8596-e64fd99129c6-catalog-content\") pod \"redhat-operators-8dmqx\" (UID: \"71b77216-d7c7-4a69-8596-e64fd99129c6\") " pod="openshift-marketplace/redhat-operators-8dmqx"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.872382 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71b77216-d7c7-4a69-8596-e64fd99129c6-utilities\") pod \"redhat-operators-8dmqx\" (UID: \"71b77216-d7c7-4a69-8596-e64fd99129c6\") " pod="openshift-marketplace/redhat-operators-8dmqx"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.872716 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71b77216-d7c7-4a69-8596-e64fd99129c6-catalog-content\") pod \"redhat-operators-8dmqx\" (UID: \"71b77216-d7c7-4a69-8596-e64fd99129c6\") " pod="openshift-marketplace/redhat-operators-8dmqx"
Jan 30 08:15:04 crc kubenswrapper[4870]: I0130 08:15:04.899220 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rvqn\" (UniqueName: \"kubernetes.io/projected/71b77216-d7c7-4a69-8596-e64fd99129c6-kube-api-access-4rvqn\") pod \"redhat-operators-8dmqx\" (UID: \"71b77216-d7c7-4a69-8596-e64fd99129c6\") " pod="openshift-marketplace/redhat-operators-8dmqx"
Jan 30 08:15:05 crc kubenswrapper[4870]: I0130 08:15:05.039711 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8dmqx"
Jan 30 08:15:05 crc kubenswrapper[4870]: I0130 08:15:05.116274 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-glxrr"]
Jan 30 08:15:05 crc kubenswrapper[4870]: W0130 08:15:05.127194 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1839882_74e1_4c94_9d83_849d10c41089.slice/crio-5cfbdd285fb5a1b266382a405cf50ff0fb9a35f550f2ebe363c8d626d46c03f9 WatchSource:0}: Error finding container 5cfbdd285fb5a1b266382a405cf50ff0fb9a35f550f2ebe363c8d626d46c03f9: Status 404 returned error can't find the container with id 5cfbdd285fb5a1b266382a405cf50ff0fb9a35f550f2ebe363c8d626d46c03f9
Jan 30 08:15:05 crc kubenswrapper[4870]: I0130 08:15:05.229015 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glxrr" event={"ID":"b1839882-74e1-4c94-9d83-849d10c41089","Type":"ContainerStarted","Data":"5cfbdd285fb5a1b266382a405cf50ff0fb9a35f550f2ebe363c8d626d46c03f9"}
Jan 30 08:15:05 crc kubenswrapper[4870]: I0130 08:15:05.448470 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8dmqx"]
Jan 30 08:15:05 crc kubenswrapper[4870]: W0130 08:15:05.450472 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71b77216_d7c7_4a69_8596_e64fd99129c6.slice/crio-76ce4bf2294f97bc51fdcf08978665956a8ea8777d7c21af9eb40ef33d40e3f3 WatchSource:0}: Error finding container 76ce4bf2294f97bc51fdcf08978665956a8ea8777d7c21af9eb40ef33d40e3f3: Status 404 returned error can't find the container with id 76ce4bf2294f97bc51fdcf08978665956a8ea8777d7c21af9eb40ef33d40e3f3
Jan 30 08:15:06 crc kubenswrapper[4870]: I0130 08:15:06.250791 4870 generic.go:334] "Generic (PLEG): container finished" podID="b1839882-74e1-4c94-9d83-849d10c41089" containerID="0fb6f743286c6bf6e84f87f46aac4248d0b275d919c7a9ed98a9102f025aaeae" exitCode=0
Jan 30 08:15:06 crc kubenswrapper[4870]: I0130 08:15:06.251164 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glxrr" event={"ID":"b1839882-74e1-4c94-9d83-849d10c41089","Type":"ContainerDied","Data":"0fb6f743286c6bf6e84f87f46aac4248d0b275d919c7a9ed98a9102f025aaeae"}
Jan 30 08:15:06 crc kubenswrapper[4870]: I0130 08:15:06.257777 4870 generic.go:334] "Generic (PLEG): container finished" podID="71b77216-d7c7-4a69-8596-e64fd99129c6" containerID="f7558377abf46275ac5cf8b97589797380793e1e66c29e69aa698b670a7ac33c" exitCode=0
Jan 30 08:15:06 crc kubenswrapper[4870]: I0130 08:15:06.257898 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dmqx" event={"ID":"71b77216-d7c7-4a69-8596-e64fd99129c6","Type":"ContainerDied","Data":"f7558377abf46275ac5cf8b97589797380793e1e66c29e69aa698b670a7ac33c"}
Jan 30 08:15:06 crc kubenswrapper[4870]: I0130 08:15:06.257958 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dmqx" event={"ID":"71b77216-d7c7-4a69-8596-e64fd99129c6","Type":"ContainerStarted","Data":"76ce4bf2294f97bc51fdcf08978665956a8ea8777d7c21af9eb40ef33d40e3f3"}
Jan 30 08:15:06 crc kubenswrapper[4870]: I0130 08:15:06.885317 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-whfhw"]
Jan 30 08:15:06 crc kubenswrapper[4870]: I0130 08:15:06.886490 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-whfhw"
Jan 30 08:15:06 crc kubenswrapper[4870]: I0130 08:15:06.893518 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 30 08:15:06 crc kubenswrapper[4870]: I0130 08:15:06.902818 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-whfhw"]
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.009799 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea80eb92-6881-4e69-8ca2-050d32254eb7-utilities\") pod \"certified-operators-whfhw\" (UID: \"ea80eb92-6881-4e69-8ca2-050d32254eb7\") " pod="openshift-marketplace/certified-operators-whfhw"
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.009973 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea80eb92-6881-4e69-8ca2-050d32254eb7-catalog-content\") pod \"certified-operators-whfhw\" (UID: \"ea80eb92-6881-4e69-8ca2-050d32254eb7\") " pod="openshift-marketplace/certified-operators-whfhw"
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.010079 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzbkb\" (UniqueName: \"kubernetes.io/projected/ea80eb92-6881-4e69-8ca2-050d32254eb7-kube-api-access-gzbkb\") pod \"certified-operators-whfhw\" (UID: \"ea80eb92-6881-4e69-8ca2-050d32254eb7\") " pod="openshift-marketplace/certified-operators-whfhw"
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.081043 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mqxgq"]
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.082175 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mqxgq"
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.084663 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.090314 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mqxgq"]
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.111792 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzbkb\" (UniqueName: \"kubernetes.io/projected/ea80eb92-6881-4e69-8ca2-050d32254eb7-kube-api-access-gzbkb\") pod \"certified-operators-whfhw\" (UID: \"ea80eb92-6881-4e69-8ca2-050d32254eb7\") " pod="openshift-marketplace/certified-operators-whfhw"
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.111865 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea80eb92-6881-4e69-8ca2-050d32254eb7-utilities\") pod \"certified-operators-whfhw\" (UID: \"ea80eb92-6881-4e69-8ca2-050d32254eb7\") " pod="openshift-marketplace/certified-operators-whfhw"
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.111926 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea80eb92-6881-4e69-8ca2-050d32254eb7-catalog-content\") pod \"certified-operators-whfhw\" (UID: \"ea80eb92-6881-4e69-8ca2-050d32254eb7\") " pod="openshift-marketplace/certified-operators-whfhw"
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.112447 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea80eb92-6881-4e69-8ca2-050d32254eb7-catalog-content\") pod \"certified-operators-whfhw\" (UID: \"ea80eb92-6881-4e69-8ca2-050d32254eb7\") " pod="openshift-marketplace/certified-operators-whfhw"
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.112921 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea80eb92-6881-4e69-8ca2-050d32254eb7-utilities\") pod \"certified-operators-whfhw\" (UID: \"ea80eb92-6881-4e69-8ca2-050d32254eb7\") " pod="openshift-marketplace/certified-operators-whfhw"
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.133253 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzbkb\" (UniqueName: \"kubernetes.io/projected/ea80eb92-6881-4e69-8ca2-050d32254eb7-kube-api-access-gzbkb\") pod \"certified-operators-whfhw\" (UID: \"ea80eb92-6881-4e69-8ca2-050d32254eb7\") " pod="openshift-marketplace/certified-operators-whfhw"
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.213407 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7b3d065-5057-49c1-be84-7880d7d4d619-utilities\") pod \"community-operators-mqxgq\" (UID: \"d7b3d065-5057-49c1-be84-7880d7d4d619\") " pod="openshift-marketplace/community-operators-mqxgq"
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.213473 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqtwk\" (UniqueName: \"kubernetes.io/projected/d7b3d065-5057-49c1-be84-7880d7d4d619-kube-api-access-bqtwk\") pod \"community-operators-mqxgq\" (UID: \"d7b3d065-5057-49c1-be84-7880d7d4d619\") " pod="openshift-marketplace/community-operators-mqxgq"
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.213495 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7b3d065-5057-49c1-be84-7880d7d4d619-catalog-content\") pod \"community-operators-mqxgq\" (UID: \"d7b3d065-5057-49c1-be84-7880d7d4d619\") " pod="openshift-marketplace/community-operators-mqxgq"
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.215648 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-whfhw"
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.272087 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dmqx" event={"ID":"71b77216-d7c7-4a69-8596-e64fd99129c6","Type":"ContainerStarted","Data":"16f0185709325c50e5b32b57ecacced21f287c87b7c1519e6c80bab8c73d585a"}
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.320010 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7b3d065-5057-49c1-be84-7880d7d4d619-utilities\") pod \"community-operators-mqxgq\" (UID: \"d7b3d065-5057-49c1-be84-7880d7d4d619\") " pod="openshift-marketplace/community-operators-mqxgq"
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.320098 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqtwk\" (UniqueName: \"kubernetes.io/projected/d7b3d065-5057-49c1-be84-7880d7d4d619-kube-api-access-bqtwk\") pod \"community-operators-mqxgq\" (UID: \"d7b3d065-5057-49c1-be84-7880d7d4d619\") " pod="openshift-marketplace/community-operators-mqxgq"
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.320121 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7b3d065-5057-49c1-be84-7880d7d4d619-catalog-content\") pod \"community-operators-mqxgq\" (UID: \"d7b3d065-5057-49c1-be84-7880d7d4d619\") " pod="openshift-marketplace/community-operators-mqxgq"
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.320681 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7b3d065-5057-49c1-be84-7880d7d4d619-catalog-content\") pod \"community-operators-mqxgq\" (UID: \"d7b3d065-5057-49c1-be84-7880d7d4d619\") " pod="openshift-marketplace/community-operators-mqxgq"
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.320983 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7b3d065-5057-49c1-be84-7880d7d4d619-utilities\") pod \"community-operators-mqxgq\" (UID: \"d7b3d065-5057-49c1-be84-7880d7d4d619\") " pod="openshift-marketplace/community-operators-mqxgq"
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.343449 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqtwk\" (UniqueName: \"kubernetes.io/projected/d7b3d065-5057-49c1-be84-7880d7d4d619-kube-api-access-bqtwk\") pod \"community-operators-mqxgq\" (UID: \"d7b3d065-5057-49c1-be84-7880d7d4d619\") " pod="openshift-marketplace/community-operators-mqxgq"
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.405394 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mqxgq"
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.630683 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-whfhw"]
Jan 30 08:15:07 crc kubenswrapper[4870]: I0130 08:15:07.788987 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mqxgq"]
Jan 30 08:15:07 crc kubenswrapper[4870]: W0130 08:15:07.854337 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7b3d065_5057_49c1_be84_7880d7d4d619.slice/crio-9bc7725a1aac8f3ce2d5c5ec8dbf24934b3bfdd442849e0ac029db29662fc649 WatchSource:0}: Error finding container 9bc7725a1aac8f3ce2d5c5ec8dbf24934b3bfdd442849e0ac029db29662fc649: Status 404 returned error can't find the container with id 9bc7725a1aac8f3ce2d5c5ec8dbf24934b3bfdd442849e0ac029db29662fc649
Jan 30 08:15:08 crc kubenswrapper[4870]: I0130 08:15:08.287286 4870 generic.go:334] "Generic (PLEG): container finished" podID="71b77216-d7c7-4a69-8596-e64fd99129c6" containerID="16f0185709325c50e5b32b57ecacced21f287c87b7c1519e6c80bab8c73d585a" exitCode=0
Jan 30 08:15:08 crc kubenswrapper[4870]: I0130 08:15:08.287362 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dmqx" event={"ID":"71b77216-d7c7-4a69-8596-e64fd99129c6","Type":"ContainerDied","Data":"16f0185709325c50e5b32b57ecacced21f287c87b7c1519e6c80bab8c73d585a"}
Jan 30 08:15:08 crc kubenswrapper[4870]: I0130 08:15:08.289658 4870 generic.go:334] "Generic (PLEG): container finished" podID="d7b3d065-5057-49c1-be84-7880d7d4d619" containerID="53ccff46255318b58b07d04b23d8f25bcd2e7063e9cb3336eb3d8abf6464ba57" exitCode=0
Jan 30 08:15:08 crc kubenswrapper[4870]: I0130 08:15:08.289743 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqxgq" event={"ID":"d7b3d065-5057-49c1-be84-7880d7d4d619","Type":"ContainerDied","Data":"53ccff46255318b58b07d04b23d8f25bcd2e7063e9cb3336eb3d8abf6464ba57"}
Jan 30 08:15:08 crc kubenswrapper[4870]: I0130 08:15:08.289781 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqxgq" event={"ID":"d7b3d065-5057-49c1-be84-7880d7d4d619","Type":"ContainerStarted","Data":"9bc7725a1aac8f3ce2d5c5ec8dbf24934b3bfdd442849e0ac029db29662fc649"}
Jan 30 08:15:08 crc kubenswrapper[4870]: I0130 08:15:08.293623 4870 generic.go:334] "Generic (PLEG): container finished" podID="ea80eb92-6881-4e69-8ca2-050d32254eb7" containerID="4f4db8280ca4d0135958943e8472d2d7a5a94788e4391049ca7cb7386c1ecee3" exitCode=0
Jan 30 08:15:08 crc kubenswrapper[4870]: I0130 08:15:08.293862 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whfhw" event={"ID":"ea80eb92-6881-4e69-8ca2-050d32254eb7","Type":"ContainerDied","Data":"4f4db8280ca4d0135958943e8472d2d7a5a94788e4391049ca7cb7386c1ecee3"}
Jan 30 08:15:08 crc kubenswrapper[4870]: I0130 08:15:08.293949 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whfhw" event={"ID":"ea80eb92-6881-4e69-8ca2-050d32254eb7","Type":"ContainerStarted","Data":"cf62dd867dfdc85f00ce0a625b67208af5a508841846bf5274a9dc74b568f567"}
Jan 30 08:15:08 crc kubenswrapper[4870]: I0130 08:15:08.301744 4870 generic.go:334] "Generic (PLEG): container finished" podID="b1839882-74e1-4c94-9d83-849d10c41089" containerID="cc14485a338ac264a7ea890f0b0accaa3f3a33b6c65407cec7c7b0303baf5081" exitCode=0
Jan 30 08:15:08 crc kubenswrapper[4870]: I0130 08:15:08.301827 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glxrr" event={"ID":"b1839882-74e1-4c94-9d83-849d10c41089","Type":"ContainerDied","Data":"cc14485a338ac264a7ea890f0b0accaa3f3a33b6c65407cec7c7b0303baf5081"}
Jan 30 08:15:09 crc kubenswrapper[4870]: I0130 08:15:09.311429 4870 generic.go:334] "Generic (PLEG): container finished" podID="d7b3d065-5057-49c1-be84-7880d7d4d619" containerID="53bbdcc8bd93ea4af16456df8b4618db541529c682064431c80b1ace4a00e00a" exitCode=0
Jan 30 08:15:09 crc kubenswrapper[4870]: I0130 08:15:09.311589 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqxgq" event={"ID":"d7b3d065-5057-49c1-be84-7880d7d4d619","Type":"ContainerDied","Data":"53bbdcc8bd93ea4af16456df8b4618db541529c682064431c80b1ace4a00e00a"}
Jan 30 08:15:09 crc kubenswrapper[4870]: I0130 08:15:09.317190 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whfhw" event={"ID":"ea80eb92-6881-4e69-8ca2-050d32254eb7","Type":"ContainerStarted","Data":"2b0c1f13038b197a88d12acd9fb107eaedab3767def340d7b9af7dcc855bef2a"}
Jan 30 08:15:09 crc kubenswrapper[4870]: I0130 08:15:09.320487 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glxrr" event={"ID":"b1839882-74e1-4c94-9d83-849d10c41089","Type":"ContainerStarted","Data":"c9149def346396f297e19fc2caa4ef54779bdeb0935c4d325d7957e05e13cbc7"}
Jan 30 08:15:09 crc kubenswrapper[4870]: I0130 08:15:09.325192 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dmqx" event={"ID":"71b77216-d7c7-4a69-8596-e64fd99129c6","Type":"ContainerStarted","Data":"64afd883eaf0caf5531d3c234b5983ac6112855435b1120c5aa207280d615f87"}
Jan 30 08:15:09 crc kubenswrapper[4870]: I0130 08:15:09.362708 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8dmqx" podStartSLOduration=2.935283677 podStartE2EDuration="5.362682577s" podCreationTimestamp="2026-01-30 08:15:04 +0000 UTC" firstStartedPulling="2026-01-30 08:15:06.259862597 +0000 UTC m=+344.955409746" lastFinishedPulling="2026-01-30 08:15:08.687261527 +0000 UTC m=+347.382808646" observedRunningTime="2026-01-30 08:15:09.356078861 +0000 UTC m=+348.051625970" watchObservedRunningTime="2026-01-30 08:15:09.362682577 +0000 UTC m=+348.058229676"
Jan 30 08:15:09 crc kubenswrapper[4870]: I0130 08:15:09.395637 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-glxrr" podStartSLOduration=2.725026579 podStartE2EDuration="5.395594767s" podCreationTimestamp="2026-01-30 08:15:04 +0000 UTC" firstStartedPulling="2026-01-30 08:15:06.255698486 +0000 UTC m=+344.951245625" lastFinishedPulling="2026-01-30 08:15:08.926266704 +0000 UTC m=+347.621813813" observedRunningTime="2026-01-30 08:15:09.392726288 +0000 UTC m=+348.088273397" watchObservedRunningTime="2026-01-30 08:15:09.395594767 +0000 UTC m=+348.091141916"
Jan 30 08:15:10 crc kubenswrapper[4870]: I0130 08:15:10.333540 4870 generic.go:334] "Generic (PLEG): container finished" podID="ea80eb92-6881-4e69-8ca2-050d32254eb7" containerID="2b0c1f13038b197a88d12acd9fb107eaedab3767def340d7b9af7dcc855bef2a" exitCode=0
Jan 30 08:15:10 crc kubenswrapper[4870]: I0130 08:15:10.333581 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whfhw" event={"ID":"ea80eb92-6881-4e69-8ca2-050d32254eb7","Type":"ContainerDied","Data":"2b0c1f13038b197a88d12acd9fb107eaedab3767def340d7b9af7dcc855bef2a"}
Jan 30 08:15:10 crc kubenswrapper[4870]: I0130 08:15:10.338191 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqxgq" event={"ID":"d7b3d065-5057-49c1-be84-7880d7d4d619","Type":"ContainerStarted","Data":"1acbed91c3bf635499b07298f9a6aadf605b78600fddd206109ebc4db66b0d62"}
Jan 30 08:15:10 crc kubenswrapper[4870]: I0130 08:15:10.383211 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mqxgq" podStartSLOduration=1.683953834 podStartE2EDuration="3.383188274s" podCreationTimestamp="2026-01-30 08:15:07 +0000 UTC" firstStartedPulling="2026-01-30 08:15:08.292133046 +0000 UTC m=+346.987680165" lastFinishedPulling="2026-01-30 08:15:09.991367476 +0000 UTC m=+348.686914605" observedRunningTime="2026-01-30 08:15:10.380328034 +0000 UTC m=+349.075875143" watchObservedRunningTime="2026-01-30 08:15:10.383188274 +0000 UTC m=+349.078735403"
Jan 30 08:15:11 crc kubenswrapper[4870]: I0130 08:15:11.347591 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whfhw" event={"ID":"ea80eb92-6881-4e69-8ca2-050d32254eb7","Type":"ContainerStarted","Data":"7a73f19a2365c56a7085d2611b8ec6d2b0bc0e74fac6b45cb792e143380218f0"}
Jan 30 08:15:11 crc kubenswrapper[4870]: I0130 08:15:11.372150 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-whfhw" podStartSLOduration=2.934882493 podStartE2EDuration="5.372122202s" podCreationTimestamp="2026-01-30 08:15:06 +0000 UTC" firstStartedPulling="2026-01-30 08:15:08.295990736 +0000 UTC m=+346.991537865" lastFinishedPulling="2026-01-30 08:15:10.733230475 +0000 UTC m=+349.428777574" observedRunningTime="2026-01-30 08:15:11.366634521 +0000 UTC m=+350.062181640" watchObservedRunningTime="2026-01-30 08:15:11.372122202 +0000 UTC m=+350.067669331"
Jan 30 08:15:14 crc kubenswrapper[4870]: I0130 08:15:14.825785 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-glxrr"
Jan 30 08:15:14 crc kubenswrapper[4870]: I0130 08:15:14.826627 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-glxrr"
Jan 30 08:15:14 crc kubenswrapper[4870]: I0130 08:15:14.884265 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-glxrr"
Jan 30 08:15:15 crc kubenswrapper[4870]: I0130 08:15:15.040627 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8dmqx"
Jan 30 08:15:15 crc kubenswrapper[4870]: I0130 08:15:15.040744 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8dmqx"
Jan 30 08:15:15 crc kubenswrapper[4870]: I0130 08:15:15.442275 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-glxrr"
Jan 30 08:15:16 crc kubenswrapper[4870]: I0130 08:15:16.114349 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8dmqx" podUID="71b77216-d7c7-4a69-8596-e64fd99129c6" containerName="registry-server" probeResult="failure" output=<
Jan 30 08:15:16 crc kubenswrapper[4870]: timeout: failed to connect service ":50051" within 1s
Jan 30 08:15:16 crc kubenswrapper[4870]: >
Jan 30 08:15:17 crc kubenswrapper[4870]: I0130 08:15:17.216130 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-whfhw"
Jan 30 08:15:17 crc kubenswrapper[4870]: I0130 08:15:17.216253 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-whfhw"
Jan 30 08:15:17 crc kubenswrapper[4870]: I0130 08:15:17.276623 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-whfhw"
Jan 30 08:15:17 crc kubenswrapper[4870]: I0130 08:15:17.406252 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mqxgq"
Jan 30 08:15:17 crc kubenswrapper[4870]: I0130 08:15:17.406328 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mqxgq"
Jan 30 08:15:17 crc kubenswrapper[4870]: I0130 08:15:17.436467 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-whfhw"
Jan 30 08:15:17 crc kubenswrapper[4870]: I0130 08:15:17.454969 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mqxgq"
Jan 30 08:15:18 crc kubenswrapper[4870]: I0130 08:15:18.449017 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mqxgq"
Jan 30 08:15:25 crc kubenswrapper[4870]: I0130 08:15:25.109991 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8dmqx"
Jan 30 08:15:25 crc kubenswrapper[4870]: I0130 08:15:25.182378 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8dmqx"
Jan 30 08:15:25 crc kubenswrapper[4870]: I0130 08:15:25.249658 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 08:15:25 crc kubenswrapper[4870]: I0130 08:15:25.249757 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 08:15:40 crc kubenswrapper[4870]: I0130 08:15:40.842760 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vhbz2"]
Jan 30 08:15:40 crc kubenswrapper[4870]: I0130 08:15:40.844993 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:40 crc kubenswrapper[4870]: I0130 08:15:40.864737 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vhbz2"]
Jan 30 08:15:40 crc kubenswrapper[4870]: I0130 08:15:40.983087 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5f3a2e4b-def0-466b-8e43-383345474a2d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:40 crc kubenswrapper[4870]: I0130 08:15:40.983150 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5f3a2e4b-def0-466b-8e43-383345474a2d-registry-certificates\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:40 crc kubenswrapper[4870]: I0130 08:15:40.983176 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8m56\" (UniqueName: \"kubernetes.io/projected/5f3a2e4b-def0-466b-8e43-383345474a2d-kube-api-access-n8m56\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:40 crc kubenswrapper[4870]: I0130 08:15:40.983203 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:40 crc kubenswrapper[4870]: I0130 08:15:40.983513 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5f3a2e4b-def0-466b-8e43-383345474a2d-registry-tls\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:40 crc kubenswrapper[4870]: I0130 08:15:40.983603 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f3a2e4b-def0-466b-8e43-383345474a2d-trusted-ca\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:40 crc kubenswrapper[4870]: I0130 08:15:40.983689 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5f3a2e4b-def0-466b-8e43-383345474a2d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:40 crc kubenswrapper[4870]: I0130 08:15:40.983912 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f3a2e4b-def0-466b-8e43-383345474a2d-bound-sa-token\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:41 crc kubenswrapper[4870]: I0130 08:15:41.025056 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:41 crc kubenswrapper[4870]: I0130 08:15:41.085296 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5f3a2e4b-def0-466b-8e43-383345474a2d-registry-certificates\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:41 crc kubenswrapper[4870]: I0130 08:15:41.085358 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8m56\" (UniqueName: \"kubernetes.io/projected/5f3a2e4b-def0-466b-8e43-383345474a2d-kube-api-access-n8m56\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:41 crc kubenswrapper[4870]: I0130 08:15:41.085419 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5f3a2e4b-def0-466b-8e43-383345474a2d-registry-tls\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:41 crc kubenswrapper[4870]: I0130 08:15:41.085444 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f3a2e4b-def0-466b-8e43-383345474a2d-trusted-ca\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:41 crc kubenswrapper[4870]: I0130 08:15:41.085471 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5f3a2e4b-def0-466b-8e43-383345474a2d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:41 crc kubenswrapper[4870]: I0130 08:15:41.085499 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f3a2e4b-def0-466b-8e43-383345474a2d-bound-sa-token\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:41 crc kubenswrapper[4870]: I0130 08:15:41.085526 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5f3a2e4b-def0-466b-8e43-383345474a2d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:41 crc kubenswrapper[4870]: I0130 08:15:41.086040 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5f3a2e4b-def0-466b-8e43-383345474a2d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:41 crc kubenswrapper[4870]: I0130 08:15:41.086742 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5f3a2e4b-def0-466b-8e43-383345474a2d-registry-certificates\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:41 crc kubenswrapper[4870]: I0130 08:15:41.088021 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f3a2e4b-def0-466b-8e43-383345474a2d-trusted-ca\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:41 crc kubenswrapper[4870]: I0130 08:15:41.093160 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5f3a2e4b-def0-466b-8e43-383345474a2d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:41 crc kubenswrapper[4870]: I0130 08:15:41.093327 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5f3a2e4b-def0-466b-8e43-383345474a2d-registry-tls\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:41 crc kubenswrapper[4870]: I0130 08:15:41.104329 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f3a2e4b-def0-466b-8e43-383345474a2d-bound-sa-token\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:41 crc kubenswrapper[4870]: I0130 08:15:41.117489 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8m56\" (UniqueName: \"kubernetes.io/projected/5f3a2e4b-def0-466b-8e43-383345474a2d-kube-api-access-n8m56\") pod \"image-registry-66df7c8f76-vhbz2\" (UID: \"5f3a2e4b-def0-466b-8e43-383345474a2d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:41 crc kubenswrapper[4870]: I0130 08:15:41.165548 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:41 crc kubenswrapper[4870]: I0130 08:15:41.409038 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vhbz2"]
Jan 30 08:15:41 crc kubenswrapper[4870]: I0130 08:15:41.542557 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2" event={"ID":"5f3a2e4b-def0-466b-8e43-383345474a2d","Type":"ContainerStarted","Data":"a90d31a814b86f627a43e25379ef68624e267caa6fd5a7c13fe2c5eeb94ccddd"}
Jan 30 08:15:42 crc kubenswrapper[4870]: I0130 08:15:42.551342 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2" event={"ID":"5f3a2e4b-def0-466b-8e43-383345474a2d","Type":"ContainerStarted","Data":"16d064f1b3edb8b3bde9ec43d0bc02c68594fa70738a5c1838c6ae297e3c59a9"}
Jan 30 08:15:42 crc kubenswrapper[4870]: I0130 08:15:42.551939 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:15:42 crc kubenswrapper[4870]: I0130 08:15:42.582786 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2" podStartSLOduration=2.58275959 podStartE2EDuration="2.58275959s" podCreationTimestamp="2026-01-30 08:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:15:42.579846729 +0000 UTC m=+381.275393848" watchObservedRunningTime="2026-01-30 08:15:42.58275959 +0000 UTC m=+381.278306709"
Jan 30 08:15:55 crc kubenswrapper[4870]: I0130 08:15:55.250243 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 08:15:55 crc kubenswrapper[4870]: I0130 08:15:55.251185 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 08:16:01 crc kubenswrapper[4870]: I0130 08:16:01.174987 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-vhbz2"
Jan 30 08:16:01 crc kubenswrapper[4870]: I0130 08:16:01.283137 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sfs65"]
Jan 30 08:16:25 crc kubenswrapper[4870]: I0130 08:16:25.249561 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 08:16:25 crc kubenswrapper[4870]: I0130 08:16:25.250433 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 30 08:16:25 crc kubenswrapper[4870]: I0130 08:16:25.250558 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:16:25 crc kubenswrapper[4870]: I0130 08:16:25.252030 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a02ea1737ed88e21ae8883a3a6a22392b0695152f4ace29771521a0445381b12"} pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:16:25 crc kubenswrapper[4870]: I0130 08:16:25.252186 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" containerID="cri-o://a02ea1737ed88e21ae8883a3a6a22392b0695152f4ace29771521a0445381b12" gracePeriod=600 Jan 30 08:16:25 crc kubenswrapper[4870]: I0130 08:16:25.832090 4870 generic.go:334] "Generic (PLEG): container finished" podID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerID="a02ea1737ed88e21ae8883a3a6a22392b0695152f4ace29771521a0445381b12" exitCode=0 Jan 30 08:16:25 crc kubenswrapper[4870]: I0130 08:16:25.832152 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerDied","Data":"a02ea1737ed88e21ae8883a3a6a22392b0695152f4ace29771521a0445381b12"} Jan 30 08:16:25 crc kubenswrapper[4870]: I0130 08:16:25.832643 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerStarted","Data":"f6c12e6d68c222de0711d262165234642f23c035c994270d4c786852d266f7a2"} Jan 30 08:16:25 crc kubenswrapper[4870]: I0130 08:16:25.832678 4870 scope.go:117] "RemoveContainer" containerID="94b47e3ea4a8a9e203f1255aaef680cd59b881d40b2d681901860cf3608c5cc7" Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.336143 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" podUID="406fb8be-c783-4ef8-8aae-5430b0226d17" containerName="registry" containerID="cri-o://5552b3f0c363ddea2c6329d4466cd116049d2d523ebab1e5ac1b44fd378bdb79" gracePeriod=30 Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.716417 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.794428 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/406fb8be-c783-4ef8-8aae-5430b0226d17-installation-pull-secrets\") pod \"406fb8be-c783-4ef8-8aae-5430b0226d17\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.794476 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/406fb8be-c783-4ef8-8aae-5430b0226d17-registry-certificates\") pod \"406fb8be-c783-4ef8-8aae-5430b0226d17\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.794551 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/406fb8be-c783-4ef8-8aae-5430b0226d17-bound-sa-token\") pod \"406fb8be-c783-4ef8-8aae-5430b0226d17\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.794578 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/406fb8be-c783-4ef8-8aae-5430b0226d17-trusted-ca\") pod \"406fb8be-c783-4ef8-8aae-5430b0226d17\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.794601 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/406fb8be-c783-4ef8-8aae-5430b0226d17-ca-trust-extracted\") pod \"406fb8be-c783-4ef8-8aae-5430b0226d17\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.794635 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jldpq\" (UniqueName: \"kubernetes.io/projected/406fb8be-c783-4ef8-8aae-5430b0226d17-kube-api-access-jldpq\") pod \"406fb8be-c783-4ef8-8aae-5430b0226d17\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.794665 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/406fb8be-c783-4ef8-8aae-5430b0226d17-registry-tls\") pod \"406fb8be-c783-4ef8-8aae-5430b0226d17\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.796200 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/406fb8be-c783-4ef8-8aae-5430b0226d17-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "406fb8be-c783-4ef8-8aae-5430b0226d17" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.796315 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"406fb8be-c783-4ef8-8aae-5430b0226d17\" (UID: \"406fb8be-c783-4ef8-8aae-5430b0226d17\") " Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.796346 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/406fb8be-c783-4ef8-8aae-5430b0226d17-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "406fb8be-c783-4ef8-8aae-5430b0226d17" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.796553 4870 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/406fb8be-c783-4ef8-8aae-5430b0226d17-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.796570 4870 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/406fb8be-c783-4ef8-8aae-5430b0226d17-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.803267 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/406fb8be-c783-4ef8-8aae-5430b0226d17-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "406fb8be-c783-4ef8-8aae-5430b0226d17" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.803458 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/406fb8be-c783-4ef8-8aae-5430b0226d17-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "406fb8be-c783-4ef8-8aae-5430b0226d17" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.804315 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/406fb8be-c783-4ef8-8aae-5430b0226d17-kube-api-access-jldpq" (OuterVolumeSpecName: "kube-api-access-jldpq") pod "406fb8be-c783-4ef8-8aae-5430b0226d17" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17"). InnerVolumeSpecName "kube-api-access-jldpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.806257 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/406fb8be-c783-4ef8-8aae-5430b0226d17-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "406fb8be-c783-4ef8-8aae-5430b0226d17" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.808515 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "406fb8be-c783-4ef8-8aae-5430b0226d17" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.817303 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406fb8be-c783-4ef8-8aae-5430b0226d17-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "406fb8be-c783-4ef8-8aae-5430b0226d17" (UID: "406fb8be-c783-4ef8-8aae-5430b0226d17"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.850653 4870 generic.go:334] "Generic (PLEG): container finished" podID="406fb8be-c783-4ef8-8aae-5430b0226d17" containerID="5552b3f0c363ddea2c6329d4466cd116049d2d523ebab1e5ac1b44fd378bdb79" exitCode=0 Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.850699 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" event={"ID":"406fb8be-c783-4ef8-8aae-5430b0226d17","Type":"ContainerDied","Data":"5552b3f0c363ddea2c6329d4466cd116049d2d523ebab1e5ac1b44fd378bdb79"} Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.850727 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" event={"ID":"406fb8be-c783-4ef8-8aae-5430b0226d17","Type":"ContainerDied","Data":"47d84e04f9b3f93637b83fdd855c471e56293ba330cba3caf1369ea3f8340bb4"} Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.850746 4870 scope.go:117] "RemoveContainer" containerID="5552b3f0c363ddea2c6329d4466cd116049d2d523ebab1e5ac1b44fd378bdb79" Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.850751 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sfs65" Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.868412 4870 scope.go:117] "RemoveContainer" containerID="5552b3f0c363ddea2c6329d4466cd116049d2d523ebab1e5ac1b44fd378bdb79" Jan 30 08:16:26 crc kubenswrapper[4870]: E0130 08:16:26.868834 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5552b3f0c363ddea2c6329d4466cd116049d2d523ebab1e5ac1b44fd378bdb79\": container with ID starting with 5552b3f0c363ddea2c6329d4466cd116049d2d523ebab1e5ac1b44fd378bdb79 not found: ID does not exist" containerID="5552b3f0c363ddea2c6329d4466cd116049d2d523ebab1e5ac1b44fd378bdb79" Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.868866 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5552b3f0c363ddea2c6329d4466cd116049d2d523ebab1e5ac1b44fd378bdb79"} err="failed to get container status \"5552b3f0c363ddea2c6329d4466cd116049d2d523ebab1e5ac1b44fd378bdb79\": rpc error: code = NotFound desc = could not find container \"5552b3f0c363ddea2c6329d4466cd116049d2d523ebab1e5ac1b44fd378bdb79\": container with ID starting with 5552b3f0c363ddea2c6329d4466cd116049d2d523ebab1e5ac1b44fd378bdb79 not found: ID does not exist" Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.887660 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sfs65"] Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.892888 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sfs65"] Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.897850 4870 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/406fb8be-c783-4ef8-8aae-5430b0226d17-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.897903 4870 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/406fb8be-c783-4ef8-8aae-5430b0226d17-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.897914 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jldpq\" (UniqueName: \"kubernetes.io/projected/406fb8be-c783-4ef8-8aae-5430b0226d17-kube-api-access-jldpq\") on node \"crc\" DevicePath \"\"" Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.897927 4870 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/406fb8be-c783-4ef8-8aae-5430b0226d17-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:16:26 crc kubenswrapper[4870]: I0130 08:16:26.897940 4870 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/406fb8be-c783-4ef8-8aae-5430b0226d17-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 08:16:26 crc kubenswrapper[4870]: E0130 08:16:26.971806 4870 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod406fb8be_c783_4ef8_8aae_5430b0226d17.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod406fb8be_c783_4ef8_8aae_5430b0226d17.slice/crio-47d84e04f9b3f93637b83fdd855c471e56293ba330cba3caf1369ea3f8340bb4\": RecentStats: unable to find data in memory cache]" Jan 30 08:16:28 crc kubenswrapper[4870]: I0130 08:16:28.091173 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="406fb8be-c783-4ef8-8aae-5430b0226d17" path="/var/lib/kubelet/pods/406fb8be-c783-4ef8-8aae-5430b0226d17/volumes" Jan 30 08:18:25 crc kubenswrapper[4870]: I0130 08:18:25.249630 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:18:25 crc kubenswrapper[4870]: I0130 08:18:25.250912 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:18:55 crc kubenswrapper[4870]: I0130 08:18:55.249525 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:18:55 crc kubenswrapper[4870]: I0130 08:18:55.250224 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.002766 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-ltz5g"] Jan 30 08:19:18 crc kubenswrapper[4870]: E0130 08:19:18.004258 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406fb8be-c783-4ef8-8aae-5430b0226d17" containerName="registry" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.004283 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="406fb8be-c783-4ef8-8aae-5430b0226d17" containerName="registry" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.004454 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="406fb8be-c783-4ef8-8aae-5430b0226d17" containerName="registry" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.005097 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2hzbl"] Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.005611 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-ltz5g" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.005772 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2hzbl" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.008558 4870 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-jvp7h" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.009045 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.009362 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.009493 4870 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-nsgpw" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.019056 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2hzbl"] Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.022784 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-n5xzk"] Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.023807 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-n5xzk" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.025420 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-ltz5g"] Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.028053 4870 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-6dtkb" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.057726 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-n5xzk"] Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.167591 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6sq4\" (UniqueName: \"kubernetes.io/projected/dfee5a53-cd5a-470f-9327-e614ff6e56b3-kube-api-access-l6sq4\") pod \"cert-manager-858654f9db-ltz5g\" (UID: \"dfee5a53-cd5a-470f-9327-e614ff6e56b3\") " pod="cert-manager/cert-manager-858654f9db-ltz5g" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.167693 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck5z8\" (UniqueName: \"kubernetes.io/projected/4e91c0f0-40df-495c-8758-892355565838-kube-api-access-ck5z8\") pod \"cert-manager-cainjector-cf98fcc89-2hzbl\" (UID: \"4e91c0f0-40df-495c-8758-892355565838\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-2hzbl" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.167934 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4mkr\" (UniqueName: \"kubernetes.io/projected/c3c8ba60-0b0f-4f22-9e7e-99b0dbc05ec1-kube-api-access-d4mkr\") pod \"cert-manager-webhook-687f57d79b-n5xzk\" (UID: \"c3c8ba60-0b0f-4f22-9e7e-99b0dbc05ec1\") " pod="cert-manager/cert-manager-webhook-687f57d79b-n5xzk" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.269859 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4mkr\" (UniqueName: \"kubernetes.io/projected/c3c8ba60-0b0f-4f22-9e7e-99b0dbc05ec1-kube-api-access-d4mkr\") pod \"cert-manager-webhook-687f57d79b-n5xzk\" (UID: \"c3c8ba60-0b0f-4f22-9e7e-99b0dbc05ec1\") " 
pod="cert-manager/cert-manager-webhook-687f57d79b-n5xzk" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.269945 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6sq4\" (UniqueName: \"kubernetes.io/projected/dfee5a53-cd5a-470f-9327-e614ff6e56b3-kube-api-access-l6sq4\") pod \"cert-manager-858654f9db-ltz5g\" (UID: \"dfee5a53-cd5a-470f-9327-e614ff6e56b3\") " pod="cert-manager/cert-manager-858654f9db-ltz5g" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.269978 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck5z8\" (UniqueName: \"kubernetes.io/projected/4e91c0f0-40df-495c-8758-892355565838-kube-api-access-ck5z8\") pod \"cert-manager-cainjector-cf98fcc89-2hzbl\" (UID: \"4e91c0f0-40df-495c-8758-892355565838\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-2hzbl" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.293696 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6sq4\" (UniqueName: \"kubernetes.io/projected/dfee5a53-cd5a-470f-9327-e614ff6e56b3-kube-api-access-l6sq4\") pod \"cert-manager-858654f9db-ltz5g\" (UID: \"dfee5a53-cd5a-470f-9327-e614ff6e56b3\") " pod="cert-manager/cert-manager-858654f9db-ltz5g" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.294076 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4mkr\" (UniqueName: \"kubernetes.io/projected/c3c8ba60-0b0f-4f22-9e7e-99b0dbc05ec1-kube-api-access-d4mkr\") pod \"cert-manager-webhook-687f57d79b-n5xzk\" (UID: \"c3c8ba60-0b0f-4f22-9e7e-99b0dbc05ec1\") " pod="cert-manager/cert-manager-webhook-687f57d79b-n5xzk" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.295546 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck5z8\" (UniqueName: \"kubernetes.io/projected/4e91c0f0-40df-495c-8758-892355565838-kube-api-access-ck5z8\") pod \"cert-manager-cainjector-cf98fcc89-2hzbl\" (UID: \"4e91c0f0-40df-495c-8758-892355565838\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-2hzbl" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.337104 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-ltz5g" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.345864 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2hzbl" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.357848 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-n5xzk" Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.622834 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2hzbl"] Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.646893 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.682077 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-ltz5g"] Jan 30 08:19:18 crc kubenswrapper[4870]: I0130 08:19:18.728054 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-n5xzk"] Jan 30 08:19:19 crc kubenswrapper[4870]: I0130 08:19:19.114002 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-n5xzk" event={"ID":"c3c8ba60-0b0f-4f22-9e7e-99b0dbc05ec1","Type":"ContainerStarted","Data":"fe888275e7e4c12155b17ef62baef34fceb7a734978a3d054f94765f44fdead3"} Jan 30 08:19:19 crc kubenswrapper[4870]: I0130 08:19:19.116315 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2hzbl" event={"ID":"4e91c0f0-40df-495c-8758-892355565838","Type":"ContainerStarted","Data":"4224cf87c60b6e8d522910b1a823312155a71a4ec56669d68ff585c5e3020415"} Jan 30 08:19:19 crc kubenswrapper[4870]: I0130 08:19:19.119934 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-ltz5g" event={"ID":"dfee5a53-cd5a-470f-9327-e614ff6e56b3","Type":"ContainerStarted","Data":"d85eb2a5e5b6087011864a97717e70327f7890274e9a6b4d024911d6b14fd2a2"} Jan 30 08:19:25 crc kubenswrapper[4870]: I0130 08:19:25.249567 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:19:25 crc kubenswrapper[4870]: I0130 08:19:25.250298 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:19:25 crc kubenswrapper[4870]: I0130 08:19:25.250369 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:19:25 crc kubenswrapper[4870]: I0130 08:19:25.251160 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f6c12e6d68c222de0711d262165234642f23c035c994270d4c786852d266f7a2"} pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:19:25 crc kubenswrapper[4870]: I0130 08:19:25.251250 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" containerID="cri-o://f6c12e6d68c222de0711d262165234642f23c035c994270d4c786852d266f7a2" gracePeriod=600 Jan 30 08:19:26 crc 
kubenswrapper[4870]: I0130 08:19:26.182464 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2hzbl" event={"ID":"4e91c0f0-40df-495c-8758-892355565838","Type":"ContainerStarted","Data":"b30160def04b3532801e1abfaad294e0d342a8fb215be5963b2b08f7a4506818"} Jan 30 08:19:26 crc kubenswrapper[4870]: I0130 08:19:26.188536 4870 generic.go:334] "Generic (PLEG): container finished" podID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerID="f6c12e6d68c222de0711d262165234642f23c035c994270d4c786852d266f7a2" exitCode=0 Jan 30 08:19:26 crc kubenswrapper[4870]: I0130 08:19:26.188614 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerDied","Data":"f6c12e6d68c222de0711d262165234642f23c035c994270d4c786852d266f7a2"} Jan 30 08:19:26 crc kubenswrapper[4870]: I0130 08:19:26.188669 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerStarted","Data":"8f05305445b605660ea999aab22b621a1da0c30929b1af3251f46624decd30be"} Jan 30 08:19:26 crc kubenswrapper[4870]: I0130 08:19:26.188704 4870 scope.go:117] "RemoveContainer" containerID="a02ea1737ed88e21ae8883a3a6a22392b0695152f4ace29771521a0445381b12" Jan 30 08:19:26 crc kubenswrapper[4870]: I0130 08:19:26.239446 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2hzbl" podStartSLOduration=2.576752582 podStartE2EDuration="9.2394237s" podCreationTimestamp="2026-01-30 08:19:17 +0000 UTC" firstStartedPulling="2026-01-30 08:19:18.646647213 +0000 UTC m=+597.342194322" lastFinishedPulling="2026-01-30 08:19:25.309318331 +0000 UTC m=+604.004865440" observedRunningTime="2026-01-30 08:19:26.204098849 +0000 UTC m=+604.899645978" watchObservedRunningTime="2026-01-30 08:19:26.2394237 +0000 UTC m=+604.934970809" Jan 30 08:19:27 crc kubenswrapper[4870]: I0130 08:19:27.678063 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cj5db"] Jan 30 08:19:27 crc kubenswrapper[4870]: I0130 08:19:27.679086 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovn-controller" containerID="cri-o://575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7" gracePeriod=30 Jan 30 08:19:27 crc kubenswrapper[4870]: I0130 08:19:27.679157 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="sbdb" containerID="cri-o://8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959" gracePeriod=30 Jan 30 08:19:27 crc kubenswrapper[4870]: I0130 08:19:27.679195 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="northd" containerID="cri-o://a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c" gracePeriod=30 Jan 30 08:19:27 crc kubenswrapper[4870]: I0130 08:19:27.679304 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" podUID="36037609-52f9-4c09-8beb-6d35a039347b" 
containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499" gracePeriod=30 Jan 30 08:19:27 crc kubenswrapper[4870]: I0130 08:19:27.679293 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="kube-rbac-proxy-node" containerID="cri-o://c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad" gracePeriod=30 Jan 30 08:19:27 crc kubenswrapper[4870]: I0130 08:19:27.679350 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovn-acl-logging" containerID="cri-o://0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71" gracePeriod=30 Jan 30 08:19:27 crc kubenswrapper[4870]: I0130 08:19:27.679294 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="nbdb" containerID="cri-o://0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165" gracePeriod=30 Jan 30 08:19:27 crc kubenswrapper[4870]: I0130 08:19:27.756753 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovnkube-controller" containerID="cri-o://b741960d899fead07c73e8ea4b750a10bd019b223fe9d09e7a67a573f3e4bee3" gracePeriod=30 Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.202040 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj5db_36037609-52f9-4c09-8beb-6d35a039347b/ovnkube-controller/3.log" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.204971 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj5db_36037609-52f9-4c09-8beb-6d35a039347b/ovn-acl-logging/0.log" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.205661 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj5db_36037609-52f9-4c09-8beb-6d35a039347b/ovn-controller/0.log" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.206056 4870 generic.go:334] "Generic (PLEG): container finished" podID="36037609-52f9-4c09-8beb-6d35a039347b" containerID="b741960d899fead07c73e8ea4b750a10bd019b223fe9d09e7a67a573f3e4bee3" exitCode=0 Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.206084 4870 generic.go:334] "Generic (PLEG): container finished" podID="36037609-52f9-4c09-8beb-6d35a039347b" containerID="8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959" exitCode=0 Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.206093 4870 generic.go:334] "Generic (PLEG): container finished" podID="36037609-52f9-4c09-8beb-6d35a039347b" containerID="0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165" exitCode=0 Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.206102 4870 generic.go:334] "Generic (PLEG): container finished" podID="36037609-52f9-4c09-8beb-6d35a039347b" containerID="a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c" exitCode=0 Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.206112 4870 generic.go:334] "Generic (PLEG): container finished" podID="36037609-52f9-4c09-8beb-6d35a039347b" 
containerID="1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499" exitCode=0 Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.206121 4870 generic.go:334] "Generic (PLEG): container finished" podID="36037609-52f9-4c09-8beb-6d35a039347b" containerID="c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad" exitCode=0 Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.206129 4870 generic.go:334] "Generic (PLEG): container finished" podID="36037609-52f9-4c09-8beb-6d35a039347b" containerID="0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71" exitCode=143 Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.206137 4870 generic.go:334] "Generic (PLEG): container finished" podID="36037609-52f9-4c09-8beb-6d35a039347b" containerID="575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7" exitCode=143 Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.206204 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerDied","Data":"b741960d899fead07c73e8ea4b750a10bd019b223fe9d09e7a67a573f3e4bee3"} Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.206283 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerDied","Data":"8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959"} Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.206314 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerDied","Data":"0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165"} Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.206332 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerDied","Data":"a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c"} Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.206354 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerDied","Data":"1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499"} Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.206378 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerDied","Data":"c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad"} Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.206397 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerDied","Data":"0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71"} Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.206403 4870 scope.go:117] "RemoveContainer" containerID="1cfc68484a864e957ca34c89143ab2d216ee6d034da9aa1502a74f8358dc2258" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.206415 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" 
event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerDied","Data":"575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7"} Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.208234 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hsmrb_3e8e9e25-2b9b-4820-8282-48e1d930a721/kube-multus/2.log" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.208689 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hsmrb_3e8e9e25-2b9b-4820-8282-48e1d930a721/kube-multus/1.log" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.208728 4870 generic.go:334] "Generic (PLEG): container finished" podID="3e8e9e25-2b9b-4820-8282-48e1d930a721" containerID="61538cdbec39ead4232db7d69f7b41605b3dfdb222395b4c93251e6ded8b3e41" exitCode=2 Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.208781 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hsmrb" event={"ID":"3e8e9e25-2b9b-4820-8282-48e1d930a721","Type":"ContainerDied","Data":"61538cdbec39ead4232db7d69f7b41605b3dfdb222395b4c93251e6ded8b3e41"} Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.209380 4870 scope.go:117] "RemoveContainer" containerID="61538cdbec39ead4232db7d69f7b41605b3dfdb222395b4c93251e6ded8b3e41" Jan 30 08:19:28 crc kubenswrapper[4870]: E0130 08:19:28.209567 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-hsmrb_openshift-multus(3e8e9e25-2b9b-4820-8282-48e1d930a721)\"" pod="openshift-multus/multus-hsmrb" podUID="3e8e9e25-2b9b-4820-8282-48e1d930a721" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.284298 4870 scope.go:117] "RemoveContainer" containerID="e3a6e35394d201b4791302c67100d562af8287400f9c84b312e22704a65348d6" Jan 30 08:19:28 crc kubenswrapper[4870]: E0130 08:19:28.284431 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cfc68484a864e957ca34c89143ab2d216ee6d034da9aa1502a74f8358dc2258\": container with ID starting with 1cfc68484a864e957ca34c89143ab2d216ee6d034da9aa1502a74f8358dc2258 not found: ID does not exist" containerID="1cfc68484a864e957ca34c89143ab2d216ee6d034da9aa1502a74f8358dc2258" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.288742 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj5db_36037609-52f9-4c09-8beb-6d35a039347b/ovn-acl-logging/0.log" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.289359 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj5db_36037609-52f9-4c09-8beb-6d35a039347b/ovn-controller/0.log" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.289860 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335050 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-var-lib-openvswitch\") pod \"36037609-52f9-4c09-8beb-6d35a039347b\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335142 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"36037609-52f9-4c09-8beb-6d35a039347b\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335170 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-run-netns\") pod \"36037609-52f9-4c09-8beb-6d35a039347b\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335200 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-run-ovn\") pod \"36037609-52f9-4c09-8beb-6d35a039347b\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335207 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "36037609-52f9-4c09-8beb-6d35a039347b" (UID: "36037609-52f9-4c09-8beb-6d35a039347b"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335260 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36037609-52f9-4c09-8beb-6d35a039347b-ovn-node-metrics-cert\") pod \"36037609-52f9-4c09-8beb-6d35a039347b\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335296 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-cni-bin\") pod \"36037609-52f9-4c09-8beb-6d35a039347b\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335292 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "36037609-52f9-4c09-8beb-6d35a039347b" (UID: "36037609-52f9-4c09-8beb-6d35a039347b"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335318 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk5ps\" (UniqueName: \"kubernetes.io/projected/36037609-52f9-4c09-8beb-6d35a039347b-kube-api-access-pk5ps\") pod \"36037609-52f9-4c09-8beb-6d35a039347b\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335342 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-systemd-units\") pod \"36037609-52f9-4c09-8beb-6d35a039347b\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335345 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "36037609-52f9-4c09-8beb-6d35a039347b" (UID: "36037609-52f9-4c09-8beb-6d35a039347b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335362 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-run-systemd\") pod \"36037609-52f9-4c09-8beb-6d35a039347b\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335387 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/36037609-52f9-4c09-8beb-6d35a039347b-ovnkube-script-lib\") pod \"36037609-52f9-4c09-8beb-6d35a039347b\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335387 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "36037609-52f9-4c09-8beb-6d35a039347b" (UID: "36037609-52f9-4c09-8beb-6d35a039347b"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335404 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36037609-52f9-4c09-8beb-6d35a039347b-ovnkube-config\") pod \"36037609-52f9-4c09-8beb-6d35a039347b\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335427 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-node-log\") pod \"36037609-52f9-4c09-8beb-6d35a039347b\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335425 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "36037609-52f9-4c09-8beb-6d35a039347b" (UID: "36037609-52f9-4c09-8beb-6d35a039347b"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335464 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-cni-netd\") pod \"36037609-52f9-4c09-8beb-6d35a039347b\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335481 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-slash\") pod \"36037609-52f9-4c09-8beb-6d35a039347b\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335506 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36037609-52f9-4c09-8beb-6d35a039347b-env-overrides\") pod \"36037609-52f9-4c09-8beb-6d35a039347b\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335534 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-log-socket\") pod \"36037609-52f9-4c09-8beb-6d35a039347b\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335569 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-etc-openvswitch\") pod \"36037609-52f9-4c09-8beb-6d35a039347b\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335592 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-run-openvswitch\") pod \"36037609-52f9-4c09-8beb-6d35a039347b\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335612 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-run-ovn-kubernetes\") pod \"36037609-52f9-4c09-8beb-6d35a039347b\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.335654 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-kubelet\") pod \"36037609-52f9-4c09-8beb-6d35a039347b\" (UID: \"36037609-52f9-4c09-8beb-6d35a039347b\") " Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.336115 4870 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.336133 4870 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.336145 4870 reconciler_common.go:293] "Volume detached for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.336178 4870 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.336195 4870 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.336319 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "36037609-52f9-4c09-8beb-6d35a039347b" (UID: "36037609-52f9-4c09-8beb-6d35a039347b"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.336922 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "36037609-52f9-4c09-8beb-6d35a039347b" (UID: "36037609-52f9-4c09-8beb-6d35a039347b"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.336984 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "36037609-52f9-4c09-8beb-6d35a039347b" (UID: "36037609-52f9-4c09-8beb-6d35a039347b"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.337112 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-log-socket" (OuterVolumeSpecName: "log-socket") pod "36037609-52f9-4c09-8beb-6d35a039347b" (UID: "36037609-52f9-4c09-8beb-6d35a039347b"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.337207 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-slash" (OuterVolumeSpecName: "host-slash") pod "36037609-52f9-4c09-8beb-6d35a039347b" (UID: "36037609-52f9-4c09-8beb-6d35a039347b"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.337847 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36037609-52f9-4c09-8beb-6d35a039347b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "36037609-52f9-4c09-8beb-6d35a039347b" (UID: "36037609-52f9-4c09-8beb-6d35a039347b"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.338059 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "36037609-52f9-4c09-8beb-6d35a039347b" (UID: "36037609-52f9-4c09-8beb-6d35a039347b"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.338103 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "36037609-52f9-4c09-8beb-6d35a039347b" (UID: "36037609-52f9-4c09-8beb-6d35a039347b"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.338722 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36037609-52f9-4c09-8beb-6d35a039347b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "36037609-52f9-4c09-8beb-6d35a039347b" (UID: "36037609-52f9-4c09-8beb-6d35a039347b"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.338778 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-node-log" (OuterVolumeSpecName: "node-log") pod "36037609-52f9-4c09-8beb-6d35a039347b" (UID: "36037609-52f9-4c09-8beb-6d35a039347b"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.338818 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "36037609-52f9-4c09-8beb-6d35a039347b" (UID: "36037609-52f9-4c09-8beb-6d35a039347b"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.339625 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36037609-52f9-4c09-8beb-6d35a039347b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "36037609-52f9-4c09-8beb-6d35a039347b" (UID: "36037609-52f9-4c09-8beb-6d35a039347b"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.343603 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36037609-52f9-4c09-8beb-6d35a039347b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "36037609-52f9-4c09-8beb-6d35a039347b" (UID: "36037609-52f9-4c09-8beb-6d35a039347b"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.344404 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36037609-52f9-4c09-8beb-6d35a039347b-kube-api-access-pk5ps" (OuterVolumeSpecName: "kube-api-access-pk5ps") pod "36037609-52f9-4c09-8beb-6d35a039347b" (UID: "36037609-52f9-4c09-8beb-6d35a039347b"). 
InnerVolumeSpecName "kube-api-access-pk5ps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.362441 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "36037609-52f9-4c09-8beb-6d35a039347b" (UID: "36037609-52f9-4c09-8beb-6d35a039347b"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.365542 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nc7ds"] Jan 30 08:19:28 crc kubenswrapper[4870]: E0130 08:19:28.365998 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovn-controller" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.366029 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovn-controller" Jan 30 08:19:28 crc kubenswrapper[4870]: E0130 08:19:28.366049 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovnkube-controller" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.366065 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovnkube-controller" Jan 30 08:19:28 crc kubenswrapper[4870]: E0130 08:19:28.366210 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="kube-rbac-proxy-node" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.366228 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="kube-rbac-proxy-node" Jan 30 08:19:28 crc kubenswrapper[4870]: E0130 08:19:28.366242 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovn-acl-logging" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.366254 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovn-acl-logging" Jan 30 08:19:28 crc kubenswrapper[4870]: E0130 08:19:28.366271 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="northd" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.366288 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="northd" Jan 30 08:19:28 crc kubenswrapper[4870]: E0130 08:19:28.366313 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="nbdb" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.366329 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="nbdb" Jan 30 08:19:28 crc kubenswrapper[4870]: E0130 08:19:28.366420 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="sbdb" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.366442 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="sbdb" Jan 30 08:19:28 crc kubenswrapper[4870]: E0130 08:19:28.366464 4870 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.366477 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 08:19:28 crc kubenswrapper[4870]: E0130 08:19:28.366507 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovnkube-controller" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.366520 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovnkube-controller" Jan 30 08:19:28 crc kubenswrapper[4870]: E0130 08:19:28.366546 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="kubecfg-setup" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.366560 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="kubecfg-setup" Jan 30 08:19:28 crc kubenswrapper[4870]: E0130 08:19:28.366580 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovnkube-controller" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.366593 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovnkube-controller" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.366779 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovnkube-controller" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.366807 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.366825 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="northd" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.366841 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovn-acl-logging" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.366854 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="kube-rbac-proxy-node" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.366869 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovnkube-controller" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.366911 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="nbdb" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.366925 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovnkube-controller" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.366946 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovnkube-controller" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.366963 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="sbdb" Jan 30 08:19:28 crc 
kubenswrapper[4870]: I0130 08:19:28.366979 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovn-controller" Jan 30 08:19:28 crc kubenswrapper[4870]: E0130 08:19:28.367147 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovnkube-controller" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.367161 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovnkube-controller" Jan 30 08:19:28 crc kubenswrapper[4870]: E0130 08:19:28.367188 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovnkube-controller" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.367202 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovnkube-controller" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.367394 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="36037609-52f9-4c09-8beb-6d35a039347b" containerName="ovnkube-controller" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.379796 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.437800 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-run-systemd\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.437906 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-node-log\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.437953 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-host-kubelet\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.437996 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-run-openvswitch\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.438035 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-etc-openvswitch\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.438073 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44b4d931-dba3-441a-aa46-ab54a5a6603d-env-overrides\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.438108 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-659lx\" (UniqueName: \"kubernetes.io/projected/44b4d931-dba3-441a-aa46-ab54a5a6603d-kube-api-access-659lx\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.438146 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-host-cni-netd\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.438181 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-host-slash\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.438216 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-run-ovn\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.438261 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-log-socket\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.438319 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-var-lib-openvswitch\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.438358 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44b4d931-dba3-441a-aa46-ab54a5a6603d-ovn-node-metrics-cert\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.438398 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-host-cni-bin\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.438432 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/44b4d931-dba3-441a-aa46-ab54a5a6603d-ovnkube-script-lib\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.438627 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44b4d931-dba3-441a-aa46-ab54a5a6603d-ovnkube-config\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.438733 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-host-run-netns\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.438766 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-systemd-units\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.438797 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-host-run-ovn-kubernetes\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.438825 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.438992 4870 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36037609-52f9-4c09-8beb-6d35a039347b-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.439020 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk5ps\" (UniqueName: \"kubernetes.io/projected/36037609-52f9-4c09-8beb-6d35a039347b-kube-api-access-pk5ps\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.439031 4870 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.439044 4870 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.439054 
4870 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/36037609-52f9-4c09-8beb-6d35a039347b-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.439063 4870 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36037609-52f9-4c09-8beb-6d35a039347b-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.439075 4870 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.439085 4870 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.439093 4870 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-slash\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.439104 4870 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36037609-52f9-4c09-8beb-6d35a039347b-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.439114 4870 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.439127 4870 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.439138 4870 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.439148 4870 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.439160 4870 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/36037609-52f9-4c09-8beb-6d35a039347b-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.540900 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-host-run-ovn-kubernetes\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.540971 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541038 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-run-systemd\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541071 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-node-log\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541106 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-host-kubelet\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541127 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-host-run-ovn-kubernetes\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541203 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-run-openvswitch\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541196 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541264 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-host-kubelet\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541139 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-run-openvswitch\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541406 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-etc-openvswitch\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541234 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-node-log\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541456 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44b4d931-dba3-441a-aa46-ab54a5a6603d-env-overrides\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541493 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-659lx\" (UniqueName: \"kubernetes.io/projected/44b4d931-dba3-441a-aa46-ab54a5a6603d-kube-api-access-659lx\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541534 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-host-cni-netd\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541569 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-host-slash\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541617 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-run-ovn\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541517 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-etc-openvswitch\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541670 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-host-cni-netd\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541676 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-log-socket\") pod \"ovnkube-node-nc7ds\" (UID: 
\"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541737 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-host-slash\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541753 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-run-ovn\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541720 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-log-socket\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541835 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-run-systemd\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.541946 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-var-lib-openvswitch\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.542011 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44b4d931-dba3-441a-aa46-ab54a5a6603d-ovn-node-metrics-cert\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.542025 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44b4d931-dba3-441a-aa46-ab54a5a6603d-env-overrides\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.542070 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-host-cni-bin\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.542086 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-var-lib-openvswitch\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.542122 
4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/44b4d931-dba3-441a-aa46-ab54a5a6603d-ovnkube-script-lib\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.542155 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-host-cni-bin\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.542259 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44b4d931-dba3-441a-aa46-ab54a5a6603d-ovnkube-config\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.542328 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-host-run-netns\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.542368 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-systemd-units\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.542390 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-host-run-netns\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.542495 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/44b4d931-dba3-441a-aa46-ab54a5a6603d-systemd-units\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.542823 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44b4d931-dba3-441a-aa46-ab54a5a6603d-ovnkube-config\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.543390 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/44b4d931-dba3-441a-aa46-ab54a5a6603d-ovnkube-script-lib\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.547307 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/44b4d931-dba3-441a-aa46-ab54a5a6603d-ovn-node-metrics-cert\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.560003 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-659lx\" (UniqueName: \"kubernetes.io/projected/44b4d931-dba3-441a-aa46-ab54a5a6603d-kube-api-access-659lx\") pod \"ovnkube-node-nc7ds\" (UID: \"44b4d931-dba3-441a-aa46-ab54a5a6603d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: I0130 08:19:28.716109 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:28 crc kubenswrapper[4870]: W0130 08:19:28.738553 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44b4d931_dba3_441a_aa46_ab54a5a6603d.slice/crio-c764fac3319c45e8fab3bd66ae124d5162255e7389a855b96561e7b8c502f5a1 WatchSource:0}: Error finding container c764fac3319c45e8fab3bd66ae124d5162255e7389a855b96561e7b8c502f5a1: Status 404 returned error can't find the container with id c764fac3319c45e8fab3bd66ae124d5162255e7389a855b96561e7b8c502f5a1 Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.221346 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-ltz5g" event={"ID":"dfee5a53-cd5a-470f-9327-e614ff6e56b3","Type":"ContainerStarted","Data":"28f19356301fe81e1c9f648ceed5619c54a4f33fddb3408b906af0ce16fe258b"} Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.223676 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hsmrb_3e8e9e25-2b9b-4820-8282-48e1d930a721/kube-multus/2.log" Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.225994 4870 generic.go:334] "Generic (PLEG): container finished" podID="44b4d931-dba3-441a-aa46-ab54a5a6603d" containerID="f15a4ec572846e0d230cb2f311da4392186ca82166c81544edb20931b290fd3c" exitCode=0 Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.226060 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" event={"ID":"44b4d931-dba3-441a-aa46-ab54a5a6603d","Type":"ContainerDied","Data":"f15a4ec572846e0d230cb2f311da4392186ca82166c81544edb20931b290fd3c"} Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.226098 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" event={"ID":"44b4d931-dba3-441a-aa46-ab54a5a6603d","Type":"ContainerStarted","Data":"c764fac3319c45e8fab3bd66ae124d5162255e7389a855b96561e7b8c502f5a1"} Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.228258 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-n5xzk" event={"ID":"c3c8ba60-0b0f-4f22-9e7e-99b0dbc05ec1","Type":"ContainerStarted","Data":"c1ef29374d5e8e38bdd15ff8b4858697608417fcb9b52c53252656fdfeee7e74"} Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.228349 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-n5xzk" Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.237507 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj5db_36037609-52f9-4c09-8beb-6d35a039347b/ovn-acl-logging/0.log" Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 
08:19:29.238734 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj5db_36037609-52f9-4c09-8beb-6d35a039347b/ovn-controller/0.log" Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.239759 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" event={"ID":"36037609-52f9-4c09-8beb-6d35a039347b","Type":"ContainerDied","Data":"cb12165531731a176212e5ceb871fbc54aec2538e3ad27d93d5c0438cf177aa7"} Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.239888 4870 scope.go:117] "RemoveContainer" containerID="b741960d899fead07c73e8ea4b750a10bd019b223fe9d09e7a67a573f3e4bee3" Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.240598 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cj5db" Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.251309 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-ltz5g" podStartSLOduration=2.703967012 podStartE2EDuration="12.251274521s" podCreationTimestamp="2026-01-30 08:19:17 +0000 UTC" firstStartedPulling="2026-01-30 08:19:18.701471766 +0000 UTC m=+597.397018875" lastFinishedPulling="2026-01-30 08:19:28.248779235 +0000 UTC m=+606.944326384" observedRunningTime="2026-01-30 08:19:29.250444064 +0000 UTC m=+607.945991213" watchObservedRunningTime="2026-01-30 08:19:29.251274521 +0000 UTC m=+607.946821680" Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.282416 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-n5xzk" podStartSLOduration=1.760761677 podStartE2EDuration="11.282392789s" podCreationTimestamp="2026-01-30 08:19:18 +0000 UTC" firstStartedPulling="2026-01-30 08:19:18.736012902 +0000 UTC m=+597.431560011" lastFinishedPulling="2026-01-30 08:19:28.257643974 +0000 UTC m=+606.953191123" observedRunningTime="2026-01-30 08:19:29.282081849 +0000 UTC m=+607.977628978" watchObservedRunningTime="2026-01-30 08:19:29.282392789 +0000 UTC m=+607.977939898" Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.291236 4870 scope.go:117] "RemoveContainer" containerID="8b69c6f4703f6562a40a39ae67345d13bdd12f5ad857b2d82c688045680ec959" Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.345123 4870 scope.go:117] "RemoveContainer" containerID="0d4f57f11509c53c37fac0cd78b5e66a0fe1f0cf9d9e871d47a8e2d1d8b5c165" Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.370577 4870 scope.go:117] "RemoveContainer" containerID="a1167f51b3289cd09b28e12e21b8d070ec42207704552e03669e4165dffcb03c" Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.391057 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cj5db"] Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.396078 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cj5db"] Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.398095 4870 scope.go:117] "RemoveContainer" containerID="1df1ddb13d52250ced37d12ff2dc768bf24b77be6e6683427491bb47fa499499" Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.417640 4870 scope.go:117] "RemoveContainer" containerID="c504ee48b14b937f75d9b9d9947b033d5a20e8d4d53e0f7fd5eaa22ed33cb1ad" Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.446415 4870 scope.go:117] "RemoveContainer" containerID="0f6e737c741e80dcd81c80f594e37faa40894808a16895b315bd4862668e7b71" Jan 30 
08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.469329 4870 scope.go:117] "RemoveContainer" containerID="575202e2c2be0f5e4a7449bfc2f2ef700313987a24a19bc751655bb2c2d118e7" Jan 30 08:19:29 crc kubenswrapper[4870]: I0130 08:19:29.505590 4870 scope.go:117] "RemoveContainer" containerID="daefd50679d10beed7fe7a9014c1cb34525b08c2c5b9cf04311684497572745f" Jan 30 08:19:30 crc kubenswrapper[4870]: I0130 08:19:30.084735 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36037609-52f9-4c09-8beb-6d35a039347b" path="/var/lib/kubelet/pods/36037609-52f9-4c09-8beb-6d35a039347b/volumes" Jan 30 08:19:30 crc kubenswrapper[4870]: I0130 08:19:30.250460 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" event={"ID":"44b4d931-dba3-441a-aa46-ab54a5a6603d","Type":"ContainerStarted","Data":"d68bc46fcb018da0aa44c01d649785d6ad2936d440430360e0bfb028bbbae3b0"} Jan 30 08:19:30 crc kubenswrapper[4870]: I0130 08:19:30.250516 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" event={"ID":"44b4d931-dba3-441a-aa46-ab54a5a6603d","Type":"ContainerStarted","Data":"c2cd7e88e5ea05c8fb49e55098077a9c4e91f0fa015d1a824f4c3f93cd1bedc1"} Jan 30 08:19:30 crc kubenswrapper[4870]: I0130 08:19:30.250539 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" event={"ID":"44b4d931-dba3-441a-aa46-ab54a5a6603d","Type":"ContainerStarted","Data":"89b9f7514af43afbb2865d7ed723d8d3ff70dd47c3039800c9fd873f0870452e"} Jan 30 08:19:30 crc kubenswrapper[4870]: I0130 08:19:30.250553 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" event={"ID":"44b4d931-dba3-441a-aa46-ab54a5a6603d","Type":"ContainerStarted","Data":"ad686efbb85ea3f7d8a9527f5f9cf65b6c804dacaea14aac6fe94243e7da8c6e"} Jan 30 08:19:30 crc kubenswrapper[4870]: I0130 08:19:30.250568 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" event={"ID":"44b4d931-dba3-441a-aa46-ab54a5a6603d","Type":"ContainerStarted","Data":"b4477771c9b4eb0f7e0362a33eb22c363d86df4400d3316a77e98a452ae9c217"} Jan 30 08:19:30 crc kubenswrapper[4870]: I0130 08:19:30.250581 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" event={"ID":"44b4d931-dba3-441a-aa46-ab54a5a6603d","Type":"ContainerStarted","Data":"d0d810de670c87161bc5f7c00601ff9ac02f8b3cd7857ca221772943c4c4b1bb"} Jan 30 08:19:33 crc kubenswrapper[4870]: I0130 08:19:33.279733 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" event={"ID":"44b4d931-dba3-441a-aa46-ab54a5a6603d","Type":"ContainerStarted","Data":"14fe3209a9ecb2c269531a1fb740f662f5c47a802f08342dabcfea6702039171"} Jan 30 08:19:33 crc kubenswrapper[4870]: I0130 08:19:33.362724 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-n5xzk" Jan 30 08:19:35 crc kubenswrapper[4870]: I0130 08:19:35.299209 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" event={"ID":"44b4d931-dba3-441a-aa46-ab54a5a6603d","Type":"ContainerStarted","Data":"066b230d174830f8f95f81ff2297f806308b4d299c2724a83e328a4721879fa2"} Jan 30 08:19:35 crc kubenswrapper[4870]: I0130 08:19:35.299726 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:35 
crc kubenswrapper[4870]: I0130 08:19:35.299775 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:35 crc kubenswrapper[4870]: I0130 08:19:35.299789 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:35 crc kubenswrapper[4870]: I0130 08:19:35.330202 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" podStartSLOduration=7.330182947 podStartE2EDuration="7.330182947s" podCreationTimestamp="2026-01-30 08:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:19:35.325488739 +0000 UTC m=+614.021035868" watchObservedRunningTime="2026-01-30 08:19:35.330182947 +0000 UTC m=+614.025730056" Jan 30 08:19:35 crc kubenswrapper[4870]: I0130 08:19:35.333138 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:35 crc kubenswrapper[4870]: I0130 08:19:35.333647 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:19:39 crc kubenswrapper[4870]: I0130 08:19:39.075144 4870 scope.go:117] "RemoveContainer" containerID="61538cdbec39ead4232db7d69f7b41605b3dfdb222395b4c93251e6ded8b3e41" Jan 30 08:19:39 crc kubenswrapper[4870]: E0130 08:19:39.076301 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-hsmrb_openshift-multus(3e8e9e25-2b9b-4820-8282-48e1d930a721)\"" pod="openshift-multus/multus-hsmrb" podUID="3e8e9e25-2b9b-4820-8282-48e1d930a721" Jan 30 08:19:50 crc kubenswrapper[4870]: I0130 08:19:50.074685 4870 scope.go:117] "RemoveContainer" containerID="61538cdbec39ead4232db7d69f7b41605b3dfdb222395b4c93251e6ded8b3e41" Jan 30 08:19:50 crc kubenswrapper[4870]: I0130 08:19:50.417138 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hsmrb_3e8e9e25-2b9b-4820-8282-48e1d930a721/kube-multus/2.log" Jan 30 08:19:50 crc kubenswrapper[4870]: I0130 08:19:50.417537 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hsmrb" event={"ID":"3e8e9e25-2b9b-4820-8282-48e1d930a721","Type":"ContainerStarted","Data":"a5c2960bb0e1aa6565d35611f4225e3dc7b6fdd6bce853738ed6f884200ad264"} Jan 30 08:19:58 crc kubenswrapper[4870]: I0130 08:19:58.747941 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nc7ds" Jan 30 08:20:08 crc kubenswrapper[4870]: I0130 08:20:08.582144 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m"] Jan 30 08:20:08 crc kubenswrapper[4870]: I0130 08:20:08.584197 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m" Jan 30 08:20:08 crc kubenswrapper[4870]: I0130 08:20:08.587340 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 08:20:08 crc kubenswrapper[4870]: I0130 08:20:08.604081 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m"] Jan 30 08:20:08 crc kubenswrapper[4870]: I0130 08:20:08.755396 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/69895f16-2797-4fd7-aedf-54fc47cd2c4f-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m\" (UID: \"69895f16-2797-4fd7-aedf-54fc47cd2c4f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m" Jan 30 08:20:08 crc kubenswrapper[4870]: I0130 08:20:08.755481 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5htwp\" (UniqueName: \"kubernetes.io/projected/69895f16-2797-4fd7-aedf-54fc47cd2c4f-kube-api-access-5htwp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m\" (UID: \"69895f16-2797-4fd7-aedf-54fc47cd2c4f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m" Jan 30 08:20:08 crc kubenswrapper[4870]: I0130 08:20:08.755559 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/69895f16-2797-4fd7-aedf-54fc47cd2c4f-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m\" (UID: \"69895f16-2797-4fd7-aedf-54fc47cd2c4f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m" Jan 30 08:20:08 crc kubenswrapper[4870]: I0130 08:20:08.856395 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/69895f16-2797-4fd7-aedf-54fc47cd2c4f-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m\" (UID: \"69895f16-2797-4fd7-aedf-54fc47cd2c4f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m" Jan 30 08:20:08 crc kubenswrapper[4870]: I0130 08:20:08.856469 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5htwp\" (UniqueName: \"kubernetes.io/projected/69895f16-2797-4fd7-aedf-54fc47cd2c4f-kube-api-access-5htwp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m\" (UID: \"69895f16-2797-4fd7-aedf-54fc47cd2c4f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m" Jan 30 08:20:08 crc kubenswrapper[4870]: I0130 08:20:08.856523 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/69895f16-2797-4fd7-aedf-54fc47cd2c4f-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m\" (UID: \"69895f16-2797-4fd7-aedf-54fc47cd2c4f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m" Jan 30 08:20:08 crc kubenswrapper[4870]: I0130 08:20:08.857053 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/69895f16-2797-4fd7-aedf-54fc47cd2c4f-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m\" (UID: \"69895f16-2797-4fd7-aedf-54fc47cd2c4f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m" Jan 30 08:20:08 crc kubenswrapper[4870]: I0130 08:20:08.857233 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/69895f16-2797-4fd7-aedf-54fc47cd2c4f-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m\" (UID: \"69895f16-2797-4fd7-aedf-54fc47cd2c4f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m" Jan 30 08:20:08 crc kubenswrapper[4870]: I0130 08:20:08.889712 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5htwp\" (UniqueName: \"kubernetes.io/projected/69895f16-2797-4fd7-aedf-54fc47cd2c4f-kube-api-access-5htwp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m\" (UID: \"69895f16-2797-4fd7-aedf-54fc47cd2c4f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m" Jan 30 08:20:08 crc kubenswrapper[4870]: I0130 08:20:08.902741 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m" Jan 30 08:20:09 crc kubenswrapper[4870]: I0130 08:20:09.207497 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m"] Jan 30 08:20:09 crc kubenswrapper[4870]: I0130 08:20:09.541630 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m" event={"ID":"69895f16-2797-4fd7-aedf-54fc47cd2c4f","Type":"ContainerStarted","Data":"4a3e158ce6ac7c5a2feaf02f10ff033257f4c33e39f21539dcbaff9607aa0dfd"} Jan 30 08:20:09 crc kubenswrapper[4870]: I0130 08:20:09.541706 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m" event={"ID":"69895f16-2797-4fd7-aedf-54fc47cd2c4f","Type":"ContainerStarted","Data":"75e1968f6545046cae56f05b6e9eb3d512ef14cce74ed0927ac9f89dd3e78524"} Jan 30 08:20:10 crc kubenswrapper[4870]: I0130 08:20:10.553252 4870 generic.go:334] "Generic (PLEG): container finished" podID="69895f16-2797-4fd7-aedf-54fc47cd2c4f" containerID="4a3e158ce6ac7c5a2feaf02f10ff033257f4c33e39f21539dcbaff9607aa0dfd" exitCode=0 Jan 30 08:20:10 crc kubenswrapper[4870]: I0130 08:20:10.553301 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m" event={"ID":"69895f16-2797-4fd7-aedf-54fc47cd2c4f","Type":"ContainerDied","Data":"4a3e158ce6ac7c5a2feaf02f10ff033257f4c33e39f21539dcbaff9607aa0dfd"} Jan 30 08:20:12 crc kubenswrapper[4870]: I0130 08:20:12.570310 4870 generic.go:334] "Generic (PLEG): container finished" podID="69895f16-2797-4fd7-aedf-54fc47cd2c4f" containerID="9feacfa1b0a9c420e41b6a6c567f4b1e361ca798b3e991ab07989f5ced3f5d37" exitCode=0 Jan 30 08:20:12 crc kubenswrapper[4870]: I0130 08:20:12.570392 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m" 
event={"ID":"69895f16-2797-4fd7-aedf-54fc47cd2c4f","Type":"ContainerDied","Data":"9feacfa1b0a9c420e41b6a6c567f4b1e361ca798b3e991ab07989f5ced3f5d37"} Jan 30 08:20:13 crc kubenswrapper[4870]: I0130 08:20:13.582724 4870 generic.go:334] "Generic (PLEG): container finished" podID="69895f16-2797-4fd7-aedf-54fc47cd2c4f" containerID="11e579aceb4c6af641442fbe030cf62d99a32ed13743bd76897395330bfbe6c5" exitCode=0 Jan 30 08:20:13 crc kubenswrapper[4870]: I0130 08:20:13.582837 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m" event={"ID":"69895f16-2797-4fd7-aedf-54fc47cd2c4f","Type":"ContainerDied","Data":"11e579aceb4c6af641442fbe030cf62d99a32ed13743bd76897395330bfbe6c5"} Jan 30 08:20:14 crc kubenswrapper[4870]: I0130 08:20:14.887552 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m" Jan 30 08:20:14 crc kubenswrapper[4870]: I0130 08:20:14.942049 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/69895f16-2797-4fd7-aedf-54fc47cd2c4f-bundle\") pod \"69895f16-2797-4fd7-aedf-54fc47cd2c4f\" (UID: \"69895f16-2797-4fd7-aedf-54fc47cd2c4f\") " Jan 30 08:20:14 crc kubenswrapper[4870]: I0130 08:20:14.947007 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69895f16-2797-4fd7-aedf-54fc47cd2c4f-bundle" (OuterVolumeSpecName: "bundle") pod "69895f16-2797-4fd7-aedf-54fc47cd2c4f" (UID: "69895f16-2797-4fd7-aedf-54fc47cd2c4f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:20:15 crc kubenswrapper[4870]: I0130 08:20:15.043182 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/69895f16-2797-4fd7-aedf-54fc47cd2c4f-util\") pod \"69895f16-2797-4fd7-aedf-54fc47cd2c4f\" (UID: \"69895f16-2797-4fd7-aedf-54fc47cd2c4f\") " Jan 30 08:20:15 crc kubenswrapper[4870]: I0130 08:20:15.043320 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5htwp\" (UniqueName: \"kubernetes.io/projected/69895f16-2797-4fd7-aedf-54fc47cd2c4f-kube-api-access-5htwp\") pod \"69895f16-2797-4fd7-aedf-54fc47cd2c4f\" (UID: \"69895f16-2797-4fd7-aedf-54fc47cd2c4f\") " Jan 30 08:20:15 crc kubenswrapper[4870]: I0130 08:20:15.043778 4870 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/69895f16-2797-4fd7-aedf-54fc47cd2c4f-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:20:15 crc kubenswrapper[4870]: I0130 08:20:15.052329 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69895f16-2797-4fd7-aedf-54fc47cd2c4f-kube-api-access-5htwp" (OuterVolumeSpecName: "kube-api-access-5htwp") pod "69895f16-2797-4fd7-aedf-54fc47cd2c4f" (UID: "69895f16-2797-4fd7-aedf-54fc47cd2c4f"). InnerVolumeSpecName "kube-api-access-5htwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:20:15 crc kubenswrapper[4870]: I0130 08:20:15.129811 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69895f16-2797-4fd7-aedf-54fc47cd2c4f-util" (OuterVolumeSpecName: "util") pod "69895f16-2797-4fd7-aedf-54fc47cd2c4f" (UID: "69895f16-2797-4fd7-aedf-54fc47cd2c4f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:20:15 crc kubenswrapper[4870]: I0130 08:20:15.146007 4870 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/69895f16-2797-4fd7-aedf-54fc47cd2c4f-util\") on node \"crc\" DevicePath \"\"" Jan 30 08:20:15 crc kubenswrapper[4870]: I0130 08:20:15.146042 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5htwp\" (UniqueName: \"kubernetes.io/projected/69895f16-2797-4fd7-aedf-54fc47cd2c4f-kube-api-access-5htwp\") on node \"crc\" DevicePath \"\"" Jan 30 08:20:15 crc kubenswrapper[4870]: I0130 08:20:15.602943 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m" event={"ID":"69895f16-2797-4fd7-aedf-54fc47cd2c4f","Type":"ContainerDied","Data":"75e1968f6545046cae56f05b6e9eb3d512ef14cce74ed0927ac9f89dd3e78524"} Jan 30 08:20:15 crc kubenswrapper[4870]: I0130 08:20:15.603027 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75e1968f6545046cae56f05b6e9eb3d512ef14cce74ed0927ac9f89dd3e78524" Jan 30 08:20:15 crc kubenswrapper[4870]: I0130 08:20:15.603072 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.673823 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-hj2pn"] Jan 30 08:20:26 crc kubenswrapper[4870]: E0130 08:20:26.674923 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69895f16-2797-4fd7-aedf-54fc47cd2c4f" containerName="util" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.674942 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="69895f16-2797-4fd7-aedf-54fc47cd2c4f" containerName="util" Jan 30 08:20:26 crc kubenswrapper[4870]: E0130 08:20:26.674958 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69895f16-2797-4fd7-aedf-54fc47cd2c4f" containerName="pull" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.674968 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="69895f16-2797-4fd7-aedf-54fc47cd2c4f" containerName="pull" Jan 30 08:20:26 crc kubenswrapper[4870]: E0130 08:20:26.674985 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69895f16-2797-4fd7-aedf-54fc47cd2c4f" containerName="extract" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.674994 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="69895f16-2797-4fd7-aedf-54fc47cd2c4f" containerName="extract" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.675148 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="69895f16-2797-4fd7-aedf-54fc47cd2c4f" containerName="extract" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.675693 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hj2pn" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.678679 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-lnpzq" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.678857 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.678970 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.685847 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-hj2pn"] Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.739857 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-579ms\" (UniqueName: \"kubernetes.io/projected/614f63fc-ed66-41bb-b9fe-4229b3b67f50-kube-api-access-579ms\") pod \"obo-prometheus-operator-68bc856cb9-hj2pn\" (UID: \"614f63fc-ed66-41bb-b9fe-4229b3b67f50\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hj2pn" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.806179 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-9clj8"] Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.807056 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-9clj8" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.809661 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.809930 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-b5jzw" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.810708 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-8wxnf"] Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.811818 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-8wxnf" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.833137 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-9clj8"] Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.836464 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-8wxnf"] Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.845053 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/586011b7-bc23-4a41-8795-bc28910cd170-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bf5558b74-9clj8\" (UID: \"586011b7-bc23-4a41-8795-bc28910cd170\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-9clj8" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.845324 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b8b459d-7a00-4e96-8916-4edd9fc87b99-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bf5558b74-8wxnf\" (UID: \"1b8b459d-7a00-4e96-8916-4edd9fc87b99\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-8wxnf" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.845439 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-579ms\" (UniqueName: \"kubernetes.io/projected/614f63fc-ed66-41bb-b9fe-4229b3b67f50-kube-api-access-579ms\") pod \"obo-prometheus-operator-68bc856cb9-hj2pn\" (UID: \"614f63fc-ed66-41bb-b9fe-4229b3b67f50\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hj2pn" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.845509 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/586011b7-bc23-4a41-8795-bc28910cd170-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bf5558b74-9clj8\" (UID: \"586011b7-bc23-4a41-8795-bc28910cd170\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-9clj8" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.845578 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b8b459d-7a00-4e96-8916-4edd9fc87b99-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bf5558b74-8wxnf\" (UID: \"1b8b459d-7a00-4e96-8916-4edd9fc87b99\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-8wxnf" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.894039 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-579ms\" (UniqueName: \"kubernetes.io/projected/614f63fc-ed66-41bb-b9fe-4229b3b67f50-kube-api-access-579ms\") pod \"obo-prometheus-operator-68bc856cb9-hj2pn\" (UID: \"614f63fc-ed66-41bb-b9fe-4229b3b67f50\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hj2pn" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.947610 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/586011b7-bc23-4a41-8795-bc28910cd170-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-bf5558b74-9clj8\" (UID: \"586011b7-bc23-4a41-8795-bc28910cd170\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-9clj8" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.948162 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b8b459d-7a00-4e96-8916-4edd9fc87b99-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bf5558b74-8wxnf\" (UID: \"1b8b459d-7a00-4e96-8916-4edd9fc87b99\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-8wxnf" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.948478 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/586011b7-bc23-4a41-8795-bc28910cd170-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bf5558b74-9clj8\" (UID: \"586011b7-bc23-4a41-8795-bc28910cd170\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-9clj8" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.948697 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b8b459d-7a00-4e96-8916-4edd9fc87b99-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bf5558b74-8wxnf\" (UID: \"1b8b459d-7a00-4e96-8916-4edd9fc87b99\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-8wxnf" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.953573 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b8b459d-7a00-4e96-8916-4edd9fc87b99-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bf5558b74-8wxnf\" (UID: \"1b8b459d-7a00-4e96-8916-4edd9fc87b99\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-8wxnf" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.953677 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b8b459d-7a00-4e96-8916-4edd9fc87b99-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bf5558b74-8wxnf\" (UID: \"1b8b459d-7a00-4e96-8916-4edd9fc87b99\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-8wxnf" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.958489 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/586011b7-bc23-4a41-8795-bc28910cd170-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bf5558b74-9clj8\" (UID: \"586011b7-bc23-4a41-8795-bc28910cd170\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-9clj8" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.959951 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/586011b7-bc23-4a41-8795-bc28910cd170-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bf5558b74-9clj8\" (UID: \"586011b7-bc23-4a41-8795-bc28910cd170\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-9clj8" Jan 30 08:20:26 crc kubenswrapper[4870]: I0130 08:20:26.991033 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hj2pn" Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.042011 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-lv4dk"] Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.043088 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-lv4dk" Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.050896 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-gnlcv" Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.050961 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.051945 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gqct\" (UniqueName: \"kubernetes.io/projected/0f7d84eb-b450-4168-b207-22520fed3fd3-kube-api-access-7gqct\") pod \"observability-operator-59bdc8b94-lv4dk\" (UID: \"0f7d84eb-b450-4168-b207-22520fed3fd3\") " pod="openshift-operators/observability-operator-59bdc8b94-lv4dk" Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.052058 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f7d84eb-b450-4168-b207-22520fed3fd3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-lv4dk\" (UID: \"0f7d84eb-b450-4168-b207-22520fed3fd3\") " pod="openshift-operators/observability-operator-59bdc8b94-lv4dk" Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.056380 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-lv4dk"] Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.131421 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-8wxnf" Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.131434 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-9clj8" Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.155631 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f7d84eb-b450-4168-b207-22520fed3fd3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-lv4dk\" (UID: \"0f7d84eb-b450-4168-b207-22520fed3fd3\") " pod="openshift-operators/observability-operator-59bdc8b94-lv4dk" Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.155756 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gqct\" (UniqueName: \"kubernetes.io/projected/0f7d84eb-b450-4168-b207-22520fed3fd3-kube-api-access-7gqct\") pod \"observability-operator-59bdc8b94-lv4dk\" (UID: \"0f7d84eb-b450-4168-b207-22520fed3fd3\") " pod="openshift-operators/observability-operator-59bdc8b94-lv4dk" Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.163742 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f7d84eb-b450-4168-b207-22520fed3fd3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-lv4dk\" (UID: \"0f7d84eb-b450-4168-b207-22520fed3fd3\") " pod="openshift-operators/observability-operator-59bdc8b94-lv4dk" Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.173828 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gqct\" (UniqueName: \"kubernetes.io/projected/0f7d84eb-b450-4168-b207-22520fed3fd3-kube-api-access-7gqct\") pod \"observability-operator-59bdc8b94-lv4dk\" (UID: \"0f7d84eb-b450-4168-b207-22520fed3fd3\") " pod="openshift-operators/observability-operator-59bdc8b94-lv4dk" Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.220628 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tmzq2"] Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.221318 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tmzq2" Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.228801 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-vndv4" Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.240536 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tmzq2"] Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.254980 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-hj2pn"] Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.259298 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/962cb597-f461-4983-b37a-a4c9e545f7d8-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tmzq2\" (UID: \"962cb597-f461-4983-b37a-a4c9e545f7d8\") " pod="openshift-operators/perses-operator-5bf474d74f-tmzq2" Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.259409 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sjs2\" (UniqueName: \"kubernetes.io/projected/962cb597-f461-4983-b37a-a4c9e545f7d8-kube-api-access-8sjs2\") pod \"perses-operator-5bf474d74f-tmzq2\" (UID: \"962cb597-f461-4983-b37a-a4c9e545f7d8\") " pod="openshift-operators/perses-operator-5bf474d74f-tmzq2" Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.363135 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sjs2\" (UniqueName: \"kubernetes.io/projected/962cb597-f461-4983-b37a-a4c9e545f7d8-kube-api-access-8sjs2\") pod \"perses-operator-5bf474d74f-tmzq2\" (UID: \"962cb597-f461-4983-b37a-a4c9e545f7d8\") " pod="openshift-operators/perses-operator-5bf474d74f-tmzq2" Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.363197 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/962cb597-f461-4983-b37a-a4c9e545f7d8-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tmzq2\" (UID: \"962cb597-f461-4983-b37a-a4c9e545f7d8\") " pod="openshift-operators/perses-operator-5bf474d74f-tmzq2" Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.364276 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/962cb597-f461-4983-b37a-a4c9e545f7d8-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tmzq2\" (UID: \"962cb597-f461-4983-b37a-a4c9e545f7d8\") " pod="openshift-operators/perses-operator-5bf474d74f-tmzq2" Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.367622 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-lv4dk" Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.398775 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sjs2\" (UniqueName: \"kubernetes.io/projected/962cb597-f461-4983-b37a-a4c9e545f7d8-kube-api-access-8sjs2\") pod \"perses-operator-5bf474d74f-tmzq2\" (UID: \"962cb597-f461-4983-b37a-a4c9e545f7d8\") " pod="openshift-operators/perses-operator-5bf474d74f-tmzq2" Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.476508 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-9clj8"] Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.510034 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-8wxnf"] Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.569447 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tmzq2" Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.687085 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-9clj8" event={"ID":"586011b7-bc23-4a41-8795-bc28910cd170","Type":"ContainerStarted","Data":"0718e3efba7ab00d61089aa48f0234f3c19944124782bf6dc48db1ceb5ea4dbe"} Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.712087 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hj2pn" event={"ID":"614f63fc-ed66-41bb-b9fe-4229b3b67f50","Type":"ContainerStarted","Data":"39d61e2b9f773d01c9ac965557ad0282920802eea08954910d03acb92bfd9278"} Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.725183 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-8wxnf" event={"ID":"1b8b459d-7a00-4e96-8916-4edd9fc87b99","Type":"ContainerStarted","Data":"f6d600f3277865fd9bfbffd570e8b09f94ff53d2fa9a50a6306e8c32eb833e37"} Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.817272 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tmzq2"] Jan 30 08:20:27 crc kubenswrapper[4870]: I0130 08:20:27.885192 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-lv4dk"] Jan 30 08:20:28 crc kubenswrapper[4870]: I0130 08:20:28.733729 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-tmzq2" event={"ID":"962cb597-f461-4983-b37a-a4c9e545f7d8","Type":"ContainerStarted","Data":"2dda537d4e989c9630a8cd6927262ab32ecf60c780355e155a6feb5801476d8b"} Jan 30 08:20:28 crc kubenswrapper[4870]: I0130 08:20:28.743068 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-lv4dk" event={"ID":"0f7d84eb-b450-4168-b207-22520fed3fd3","Type":"ContainerStarted","Data":"22f5371384e5321a1545407fc0590c38bbba07f5d5511f5d3e4e9bce45c3206d"} Jan 30 08:20:38 crc kubenswrapper[4870]: I0130 08:20:38.812751 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-8wxnf" event={"ID":"1b8b459d-7a00-4e96-8916-4edd9fc87b99","Type":"ContainerStarted","Data":"78e25f2b510934b0bd5b3ada7089ca22e2a2652e8e7bfb1e8e81bc64e2854433"} Jan 30 08:20:38 crc 
kubenswrapper[4870]: I0130 08:20:38.815055 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-9clj8" event={"ID":"586011b7-bc23-4a41-8795-bc28910cd170","Type":"ContainerStarted","Data":"b59ce053c8082cc1e42cb72f91c12093af2d246d90831d6513de727e82306c22"} Jan 30 08:20:38 crc kubenswrapper[4870]: I0130 08:20:38.819896 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-lv4dk" event={"ID":"0f7d84eb-b450-4168-b207-22520fed3fd3","Type":"ContainerStarted","Data":"e0ef739332fd43c194106bab2d52cad33b9271ab639540e37296597585d9c784"} Jan 30 08:20:38 crc kubenswrapper[4870]: I0130 08:20:38.820869 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-lv4dk" Jan 30 08:20:38 crc kubenswrapper[4870]: I0130 08:20:38.829412 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hj2pn" event={"ID":"614f63fc-ed66-41bb-b9fe-4229b3b67f50","Type":"ContainerStarted","Data":"0261bbf16a602c1f35f8660cff758fe3f9e9042e3b4752cc140bd7783bb703b7"} Jan 30 08:20:38 crc kubenswrapper[4870]: I0130 08:20:38.831421 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-tmzq2" event={"ID":"962cb597-f461-4983-b37a-a4c9e545f7d8","Type":"ContainerStarted","Data":"c5aea41b24bbfd35dbc306c65a3e8681a75ae912988702b88a461d8133d9cbfa"} Jan 30 08:20:38 crc kubenswrapper[4870]: I0130 08:20:38.831754 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-tmzq2" Jan 30 08:20:38 crc kubenswrapper[4870]: I0130 08:20:38.839739 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-lv4dk" Jan 30 08:20:38 crc kubenswrapper[4870]: I0130 08:20:38.854576 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-8wxnf" podStartSLOduration=2.336359935 podStartE2EDuration="12.854555995s" podCreationTimestamp="2026-01-30 08:20:26 +0000 UTC" firstStartedPulling="2026-01-30 08:20:27.569394644 +0000 UTC m=+666.264941753" lastFinishedPulling="2026-01-30 08:20:38.087590704 +0000 UTC m=+676.783137813" observedRunningTime="2026-01-30 08:20:38.848134523 +0000 UTC m=+677.543681632" watchObservedRunningTime="2026-01-30 08:20:38.854555995 +0000 UTC m=+677.550103104" Jan 30 08:20:38 crc kubenswrapper[4870]: I0130 08:20:38.916324 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hj2pn" podStartSLOduration=2.117184174 podStartE2EDuration="12.916302126s" podCreationTimestamp="2026-01-30 08:20:26 +0000 UTC" firstStartedPulling="2026-01-30 08:20:27.288382299 +0000 UTC m=+665.983929408" lastFinishedPulling="2026-01-30 08:20:38.087500261 +0000 UTC m=+676.783047360" observedRunningTime="2026-01-30 08:20:38.915006855 +0000 UTC m=+677.610553994" watchObservedRunningTime="2026-01-30 08:20:38.916302126 +0000 UTC m=+677.611849235" Jan 30 08:20:38 crc kubenswrapper[4870]: I0130 08:20:38.920718 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf5558b74-9clj8" podStartSLOduration=2.323220791 podStartE2EDuration="12.920704354s" podCreationTimestamp="2026-01-30 
08:20:26 +0000 UTC" firstStartedPulling="2026-01-30 08:20:27.511118761 +0000 UTC m=+666.206665870" lastFinishedPulling="2026-01-30 08:20:38.108602314 +0000 UTC m=+676.804149433" observedRunningTime="2026-01-30 08:20:38.897103271 +0000 UTC m=+677.592650390" watchObservedRunningTime="2026-01-30 08:20:38.920704354 +0000 UTC m=+677.616251463" Jan 30 08:20:38 crc kubenswrapper[4870]: I0130 08:20:38.949829 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-tmzq2" podStartSLOduration=1.686725998 podStartE2EDuration="11.949814059s" podCreationTimestamp="2026-01-30 08:20:27 +0000 UTC" firstStartedPulling="2026-01-30 08:20:27.845026398 +0000 UTC m=+666.540573507" lastFinishedPulling="2026-01-30 08:20:38.108114449 +0000 UTC m=+676.803661568" observedRunningTime="2026-01-30 08:20:38.947477955 +0000 UTC m=+677.643025064" watchObservedRunningTime="2026-01-30 08:20:38.949814059 +0000 UTC m=+677.645361168" Jan 30 08:20:38 crc kubenswrapper[4870]: I0130 08:20:38.970956 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-lv4dk" podStartSLOduration=2.7014626809999998 podStartE2EDuration="12.970932703s" podCreationTimestamp="2026-01-30 08:20:26 +0000 UTC" firstStartedPulling="2026-01-30 08:20:27.897689843 +0000 UTC m=+666.593236952" lastFinishedPulling="2026-01-30 08:20:38.167159865 +0000 UTC m=+676.862706974" observedRunningTime="2026-01-30 08:20:38.967113672 +0000 UTC m=+677.662660791" watchObservedRunningTime="2026-01-30 08:20:38.970932703 +0000 UTC m=+677.666479812" Jan 30 08:20:47 crc kubenswrapper[4870]: I0130 08:20:47.572306 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-tmzq2" Jan 30 08:21:06 crc kubenswrapper[4870]: I0130 08:21:06.469136 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m"] Jan 30 08:21:06 crc kubenswrapper[4870]: I0130 08:21:06.470688 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m" Jan 30 08:21:06 crc kubenswrapper[4870]: I0130 08:21:06.472126 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 08:21:06 crc kubenswrapper[4870]: I0130 08:21:06.482612 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m"] Jan 30 08:21:06 crc kubenswrapper[4870]: I0130 08:21:06.610971 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e702b53f-5799-4595-b78f-35717f81379f-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m\" (UID: \"e702b53f-5799-4595-b78f-35717f81379f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m" Jan 30 08:21:06 crc kubenswrapper[4870]: I0130 08:21:06.611022 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-476gr\" (UniqueName: \"kubernetes.io/projected/e702b53f-5799-4595-b78f-35717f81379f-kube-api-access-476gr\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m\" (UID: \"e702b53f-5799-4595-b78f-35717f81379f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m" Jan 30 08:21:06 crc kubenswrapper[4870]: I0130 08:21:06.611069 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e702b53f-5799-4595-b78f-35717f81379f-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m\" (UID: \"e702b53f-5799-4595-b78f-35717f81379f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m" Jan 30 08:21:06 crc kubenswrapper[4870]: I0130 08:21:06.712920 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e702b53f-5799-4595-b78f-35717f81379f-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m\" (UID: \"e702b53f-5799-4595-b78f-35717f81379f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m" Jan 30 08:21:06 crc kubenswrapper[4870]: I0130 08:21:06.713339 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e702b53f-5799-4595-b78f-35717f81379f-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m\" (UID: \"e702b53f-5799-4595-b78f-35717f81379f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m" Jan 30 08:21:06 crc kubenswrapper[4870]: I0130 08:21:06.713461 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e702b53f-5799-4595-b78f-35717f81379f-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m\" (UID: \"e702b53f-5799-4595-b78f-35717f81379f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m" Jan 30 08:21:06 crc kubenswrapper[4870]: I0130 08:21:06.713639 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-476gr\" (UniqueName: 
\"kubernetes.io/projected/e702b53f-5799-4595-b78f-35717f81379f-kube-api-access-476gr\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m\" (UID: \"e702b53f-5799-4595-b78f-35717f81379f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m" Jan 30 08:21:06 crc kubenswrapper[4870]: I0130 08:21:06.714057 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e702b53f-5799-4595-b78f-35717f81379f-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m\" (UID: \"e702b53f-5799-4595-b78f-35717f81379f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m" Jan 30 08:21:06 crc kubenswrapper[4870]: I0130 08:21:06.744047 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-476gr\" (UniqueName: \"kubernetes.io/projected/e702b53f-5799-4595-b78f-35717f81379f-kube-api-access-476gr\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m\" (UID: \"e702b53f-5799-4595-b78f-35717f81379f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m" Jan 30 08:21:06 crc kubenswrapper[4870]: I0130 08:21:06.787475 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m" Jan 30 08:21:07 crc kubenswrapper[4870]: I0130 08:21:07.205539 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m"] Jan 30 08:21:08 crc kubenswrapper[4870]: I0130 08:21:08.072118 4870 generic.go:334] "Generic (PLEG): container finished" podID="e702b53f-5799-4595-b78f-35717f81379f" containerID="c46a8edcce2da87cc92f64727d7837f9580526bbaf0396b8eec4ad0aa5f7fb93" exitCode=0 Jan 30 08:21:08 crc kubenswrapper[4870]: I0130 08:21:08.072202 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m" event={"ID":"e702b53f-5799-4595-b78f-35717f81379f","Type":"ContainerDied","Data":"c46a8edcce2da87cc92f64727d7837f9580526bbaf0396b8eec4ad0aa5f7fb93"} Jan 30 08:21:08 crc kubenswrapper[4870]: I0130 08:21:08.072532 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m" event={"ID":"e702b53f-5799-4595-b78f-35717f81379f","Type":"ContainerStarted","Data":"d70c233e62faf05cebf8fa034b0cebb431e07a2724119a9ba26f415c197ffaf7"} Jan 30 08:21:10 crc kubenswrapper[4870]: I0130 08:21:10.086357 4870 generic.go:334] "Generic (PLEG): container finished" podID="e702b53f-5799-4595-b78f-35717f81379f" containerID="fccb5b271e00dc4a065ddb224096d7eeb02d6ecf4f0e199b42da8ec5a3715ff8" exitCode=0 Jan 30 08:21:10 crc kubenswrapper[4870]: I0130 08:21:10.087622 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m" event={"ID":"e702b53f-5799-4595-b78f-35717f81379f","Type":"ContainerDied","Data":"fccb5b271e00dc4a065ddb224096d7eeb02d6ecf4f0e199b42da8ec5a3715ff8"} Jan 30 08:21:11 crc kubenswrapper[4870]: I0130 08:21:11.095463 4870 generic.go:334] "Generic (PLEG): container finished" podID="e702b53f-5799-4595-b78f-35717f81379f" containerID="e4448b4f1e874ff8ed7d10c11ffe633ce3a9be9f2572f7e289aa86cd17295676" exitCode=0 Jan 30 08:21:11 crc kubenswrapper[4870]: 
I0130 08:21:11.095518 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m" event={"ID":"e702b53f-5799-4595-b78f-35717f81379f","Type":"ContainerDied","Data":"e4448b4f1e874ff8ed7d10c11ffe633ce3a9be9f2572f7e289aa86cd17295676"} Jan 30 08:21:12 crc kubenswrapper[4870]: I0130 08:21:12.437651 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m" Jan 30 08:21:12 crc kubenswrapper[4870]: I0130 08:21:12.626980 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e702b53f-5799-4595-b78f-35717f81379f-bundle\") pod \"e702b53f-5799-4595-b78f-35717f81379f\" (UID: \"e702b53f-5799-4595-b78f-35717f81379f\") " Jan 30 08:21:12 crc kubenswrapper[4870]: I0130 08:21:12.627043 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-476gr\" (UniqueName: \"kubernetes.io/projected/e702b53f-5799-4595-b78f-35717f81379f-kube-api-access-476gr\") pod \"e702b53f-5799-4595-b78f-35717f81379f\" (UID: \"e702b53f-5799-4595-b78f-35717f81379f\") " Jan 30 08:21:12 crc kubenswrapper[4870]: I0130 08:21:12.627091 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e702b53f-5799-4595-b78f-35717f81379f-util\") pod \"e702b53f-5799-4595-b78f-35717f81379f\" (UID: \"e702b53f-5799-4595-b78f-35717f81379f\") " Jan 30 08:21:12 crc kubenswrapper[4870]: I0130 08:21:12.628297 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e702b53f-5799-4595-b78f-35717f81379f-bundle" (OuterVolumeSpecName: "bundle") pod "e702b53f-5799-4595-b78f-35717f81379f" (UID: "e702b53f-5799-4595-b78f-35717f81379f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:21:12 crc kubenswrapper[4870]: I0130 08:21:12.638160 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e702b53f-5799-4595-b78f-35717f81379f-kube-api-access-476gr" (OuterVolumeSpecName: "kube-api-access-476gr") pod "e702b53f-5799-4595-b78f-35717f81379f" (UID: "e702b53f-5799-4595-b78f-35717f81379f"). InnerVolumeSpecName "kube-api-access-476gr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:21:12 crc kubenswrapper[4870]: I0130 08:21:12.728540 4870 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e702b53f-5799-4595-b78f-35717f81379f-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:21:12 crc kubenswrapper[4870]: I0130 08:21:12.728587 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-476gr\" (UniqueName: \"kubernetes.io/projected/e702b53f-5799-4595-b78f-35717f81379f-kube-api-access-476gr\") on node \"crc\" DevicePath \"\"" Jan 30 08:21:12 crc kubenswrapper[4870]: I0130 08:21:12.921483 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e702b53f-5799-4595-b78f-35717f81379f-util" (OuterVolumeSpecName: "util") pod "e702b53f-5799-4595-b78f-35717f81379f" (UID: "e702b53f-5799-4595-b78f-35717f81379f"). InnerVolumeSpecName "util". 
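
The paired "Generic (PLEG): container finished" and "SyncLoop (PLEG): event for pod" lines above are the Pod Lifecycle Event Generator at work: it periodically relists containers from CRI-O, diffs the result against its previous snapshot, and feeds ContainerStarted/ContainerDied events into the kubelet sync loop. Here the pull, extract, and util containers of a marketplace bundle-unpack pod each exit 0 in turn. A toy version of the relist diff; the real PLEG also handles sandboxes, removals, and an event cache:

    package main

    import "fmt"

    type state int

    const (
        running state = iota
        exited
    )

    // relist diffs the previous and current container snapshots and emits
    // the PLEG-style events the sync loop would consume.
    func relist(prev, curr map[string]state) []string {
        var events []string
        for id, s := range curr {
            old, seen := prev[id]
            switch {
            case !seen && s == running:
                events = append(events, "ContainerStarted "+id)
            case seen && old == running && s == exited:
                events = append(events, "ContainerDied "+id)
            }
        }
        return events
    }

    func main() {
        prev := map[string]state{"extract": running}
        curr := map[string]state{"extract": exited}
        for _, e := range relist(prev, curr) {
            fmt.Println(e) // ContainerDied extract
        }
    }
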
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:21:12 crc kubenswrapper[4870]: I0130 08:21:12.930917 4870 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e702b53f-5799-4595-b78f-35717f81379f-util\") on node \"crc\" DevicePath \"\"" Jan 30 08:21:13 crc kubenswrapper[4870]: I0130 08:21:13.127647 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m" event={"ID":"e702b53f-5799-4595-b78f-35717f81379f","Type":"ContainerDied","Data":"d70c233e62faf05cebf8fa034b0cebb431e07a2724119a9ba26f415c197ffaf7"} Jan 30 08:21:13 crc kubenswrapper[4870]: I0130 08:21:13.127714 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d70c233e62faf05cebf8fa034b0cebb431e07a2724119a9ba26f415c197ffaf7" Jan 30 08:21:13 crc kubenswrapper[4870]: I0130 08:21:13.127773 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m" Jan 30 08:21:17 crc kubenswrapper[4870]: I0130 08:21:17.915763 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-sf8qk"] Jan 30 08:21:17 crc kubenswrapper[4870]: E0130 08:21:17.916603 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e702b53f-5799-4595-b78f-35717f81379f" containerName="pull" Jan 30 08:21:17 crc kubenswrapper[4870]: I0130 08:21:17.916620 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e702b53f-5799-4595-b78f-35717f81379f" containerName="pull" Jan 30 08:21:17 crc kubenswrapper[4870]: E0130 08:21:17.916630 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e702b53f-5799-4595-b78f-35717f81379f" containerName="util" Jan 30 08:21:17 crc kubenswrapper[4870]: I0130 08:21:17.916639 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e702b53f-5799-4595-b78f-35717f81379f" containerName="util" Jan 30 08:21:17 crc kubenswrapper[4870]: E0130 08:21:17.916652 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e702b53f-5799-4595-b78f-35717f81379f" containerName="extract" Jan 30 08:21:17 crc kubenswrapper[4870]: I0130 08:21:17.916661 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e702b53f-5799-4595-b78f-35717f81379f" containerName="extract" Jan 30 08:21:17 crc kubenswrapper[4870]: I0130 08:21:17.916797 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e702b53f-5799-4595-b78f-35717f81379f" containerName="extract" Jan 30 08:21:17 crc kubenswrapper[4870]: I0130 08:21:17.917341 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-sf8qk" Jan 30 08:21:17 crc kubenswrapper[4870]: I0130 08:21:17.919498 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 30 08:21:17 crc kubenswrapper[4870]: I0130 08:21:17.919578 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 30 08:21:17 crc kubenswrapper[4870]: I0130 08:21:17.920153 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-jpb8x" Jan 30 08:21:17 crc kubenswrapper[4870]: I0130 08:21:17.929312 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-sf8qk"] Jan 30 08:21:18 crc kubenswrapper[4870]: I0130 08:21:18.113849 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s7dq\" (UniqueName: \"kubernetes.io/projected/bdb3e88d-691c-478c-ab03-cc84b8e04ea6-kube-api-access-7s7dq\") pod \"nmstate-operator-646758c888-sf8qk\" (UID: \"bdb3e88d-691c-478c-ab03-cc84b8e04ea6\") " pod="openshift-nmstate/nmstate-operator-646758c888-sf8qk" Jan 30 08:21:18 crc kubenswrapper[4870]: I0130 08:21:18.215195 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s7dq\" (UniqueName: \"kubernetes.io/projected/bdb3e88d-691c-478c-ab03-cc84b8e04ea6-kube-api-access-7s7dq\") pod \"nmstate-operator-646758c888-sf8qk\" (UID: \"bdb3e88d-691c-478c-ab03-cc84b8e04ea6\") " pod="openshift-nmstate/nmstate-operator-646758c888-sf8qk" Jan 30 08:21:18 crc kubenswrapper[4870]: I0130 08:21:18.236264 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s7dq\" (UniqueName: \"kubernetes.io/projected/bdb3e88d-691c-478c-ab03-cc84b8e04ea6-kube-api-access-7s7dq\") pod \"nmstate-operator-646758c888-sf8qk\" (UID: \"bdb3e88d-691c-478c-ab03-cc84b8e04ea6\") " pod="openshift-nmstate/nmstate-operator-646758c888-sf8qk" Jan 30 08:21:18 crc kubenswrapper[4870]: I0130 08:21:18.271589 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-sf8qk" Jan 30 08:21:18 crc kubenswrapper[4870]: I0130 08:21:18.574470 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-sf8qk"] Jan 30 08:21:18 crc kubenswrapper[4870]: W0130 08:21:18.584487 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdb3e88d_691c_478c_ab03_cc84b8e04ea6.slice/crio-1d8a664e6e2e4fee6cdb103f1e8125d571ea42ae6072e5ac837eb40e12ac0ae2 WatchSource:0}: Error finding container 1d8a664e6e2e4fee6cdb103f1e8125d571ea42ae6072e5ac837eb40e12ac0ae2: Status 404 returned error can't find the container with id 1d8a664e6e2e4fee6cdb103f1e8125d571ea42ae6072e5ac837eb40e12ac0ae2 Jan 30 08:21:19 crc kubenswrapper[4870]: I0130 08:21:19.174581 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-sf8qk" event={"ID":"bdb3e88d-691c-478c-ab03-cc84b8e04ea6","Type":"ContainerStarted","Data":"1d8a664e6e2e4fee6cdb103f1e8125d571ea42ae6072e5ac837eb40e12ac0ae2"} Jan 30 08:21:21 crc kubenswrapper[4870]: I0130 08:21:21.189549 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-sf8qk" event={"ID":"bdb3e88d-691c-478c-ab03-cc84b8e04ea6","Type":"ContainerStarted","Data":"3251b84fc254256ff77143b377665fdb285a495e2a4dacb8ee280a918b2a834b"} Jan 30 08:21:21 crc kubenswrapper[4870]: I0130 08:21:21.213294 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-sf8qk" podStartSLOduration=2.048936836 podStartE2EDuration="4.213259418s" podCreationTimestamp="2026-01-30 08:21:17 +0000 UTC" firstStartedPulling="2026-01-30 08:21:18.587736579 +0000 UTC m=+717.283283708" lastFinishedPulling="2026-01-30 08:21:20.752059141 +0000 UTC m=+719.447606290" observedRunningTime="2026-01-30 08:21:21.209441919 +0000 UTC m=+719.904989058" watchObservedRunningTime="2026-01-30 08:21:21.213259418 +0000 UTC m=+719.908806567" Jan 30 08:21:25 crc kubenswrapper[4870]: I0130 08:21:25.249778 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:21:25 crc kubenswrapper[4870]: I0130 08:21:25.250560 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.752823 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-xdc74"] Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.753776 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-xdc74"
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.759133 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-hvv85"
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.767317 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-xdc74"]
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.792465 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rsk45"]
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.793434 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rsk45"
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.796214 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.796766 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rsk45"]
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.808614 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-tnl9h"]
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.809378 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-tnl9h"
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.856694 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zfb2\" (UniqueName: \"kubernetes.io/projected/86d16b9b-390e-442a-a74f-a9e32e92da59-kube-api-access-4zfb2\") pod \"nmstate-metrics-54757c584b-xdc74\" (UID: \"86d16b9b-390e-442a-a74f-a9e32e92da59\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-xdc74"
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.917061 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-ql9j9"]
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.917716 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-ql9j9"
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.922592 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.922665 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.935256 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-ql9j9"]
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.938844 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-9j4m5"
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.957568 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zfb2\" (UniqueName: \"kubernetes.io/projected/86d16b9b-390e-442a-a74f-a9e32e92da59-kube-api-access-4zfb2\") pod \"nmstate-metrics-54757c584b-xdc74\" (UID: \"86d16b9b-390e-442a-a74f-a9e32e92da59\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-xdc74"
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.957613 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f38692e7-8fd1-48e1-ab3b-07cbac975021-ovs-socket\") pod \"nmstate-handler-tnl9h\" (UID: \"f38692e7-8fd1-48e1-ab3b-07cbac975021\") " pod="openshift-nmstate/nmstate-handler-tnl9h"
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.957681 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfxfj\" (UniqueName: \"kubernetes.io/projected/f38692e7-8fd1-48e1-ab3b-07cbac975021-kube-api-access-dfxfj\") pod \"nmstate-handler-tnl9h\" (UID: \"f38692e7-8fd1-48e1-ab3b-07cbac975021\") " pod="openshift-nmstate/nmstate-handler-tnl9h"
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.957705 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f38692e7-8fd1-48e1-ab3b-07cbac975021-dbus-socket\") pod \"nmstate-handler-tnl9h\" (UID: \"f38692e7-8fd1-48e1-ab3b-07cbac975021\") " pod="openshift-nmstate/nmstate-handler-tnl9h"
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.957726 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f38692e7-8fd1-48e1-ab3b-07cbac975021-nmstate-lock\") pod \"nmstate-handler-tnl9h\" (UID: \"f38692e7-8fd1-48e1-ab3b-07cbac975021\") " pod="openshift-nmstate/nmstate-handler-tnl9h"
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.957750 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p524q\" (UniqueName: \"kubernetes.io/projected/06799197-023a-4ed3-a378-9a1fbf25fda2-kube-api-access-p524q\") pod \"nmstate-webhook-8474b5b9d8-rsk45\" (UID: \"06799197-023a-4ed3-a378-9a1fbf25fda2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rsk45"
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.957768 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/06799197-023a-4ed3-a378-9a1fbf25fda2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rsk45\" (UID: \"06799197-023a-4ed3-a378-9a1fbf25fda2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rsk45"
Jan 30 08:21:27 crc kubenswrapper[4870]: I0130 08:21:27.987702 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zfb2\" (UniqueName: \"kubernetes.io/projected/86d16b9b-390e-442a-a74f-a9e32e92da59-kube-api-access-4zfb2\") pod \"nmstate-metrics-54757c584b-xdc74\" (UID: \"86d16b9b-390e-442a-a74f-a9e32e92da59\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-xdc74"
Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.059294 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f38692e7-8fd1-48e1-ab3b-07cbac975021-nmstate-lock\") pod \"nmstate-handler-tnl9h\" (UID: \"f38692e7-8fd1-48e1-ab3b-07cbac975021\") " pod="openshift-nmstate/nmstate-handler-tnl9h"
Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.059345 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p524q\" (UniqueName: \"kubernetes.io/projected/06799197-023a-4ed3-a378-9a1fbf25fda2-kube-api-access-p524q\") pod \"nmstate-webhook-8474b5b9d8-rsk45\" (UID: \"06799197-023a-4ed3-a378-9a1fbf25fda2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rsk45"
Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.059366 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/06799197-023a-4ed3-a378-9a1fbf25fda2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rsk45\" (UID: \"06799197-023a-4ed3-a378-9a1fbf25fda2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rsk45"
Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.059411 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f38692e7-8fd1-48e1-ab3b-07cbac975021-ovs-socket\") pod \"nmstate-handler-tnl9h\" (UID: \"f38692e7-8fd1-48e1-ab3b-07cbac975021\") " pod="openshift-nmstate/nmstate-handler-tnl9h"
Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.059450 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwh7d\" (UniqueName: \"kubernetes.io/projected/b7e9a284-8b5c-4ae7-b388-3e9f907082d2-kube-api-access-zwh7d\") pod \"nmstate-console-plugin-7754f76f8b-ql9j9\" (UID: \"b7e9a284-8b5c-4ae7-b388-3e9f907082d2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-ql9j9"
Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.059476 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7e9a284-8b5c-4ae7-b388-3e9f907082d2-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-ql9j9\" (UID: \"b7e9a284-8b5c-4ae7-b388-3e9f907082d2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-ql9j9"
Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.059510 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfxfj\" (UniqueName: \"kubernetes.io/projected/f38692e7-8fd1-48e1-ab3b-07cbac975021-kube-api-access-dfxfj\") pod \"nmstate-handler-tnl9h\" (UID: \"f38692e7-8fd1-48e1-ab3b-07cbac975021\") " pod="openshift-nmstate/nmstate-handler-tnl9h"
Jan 30 08:21:28 crc kubenswrapper[4870]: E0130 08:21:28.059518 4870 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.059530 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b7e9a284-8b5c-4ae7-b388-3e9f907082d2-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-ql9j9\" (UID: \"b7e9a284-8b5c-4ae7-b388-3e9f907082d2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-ql9j9"
Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.059563 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f38692e7-8fd1-48e1-ab3b-07cbac975021-ovs-socket\") pod \"nmstate-handler-tnl9h\" (UID: \"f38692e7-8fd1-48e1-ab3b-07cbac975021\") " pod="openshift-nmstate/nmstate-handler-tnl9h"
Jan 30 08:21:28 crc kubenswrapper[4870]: E0130 08:21:28.059651 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06799197-023a-4ed3-a378-9a1fbf25fda2-tls-key-pair podName:06799197-023a-4ed3-a378-9a1fbf25fda2 nodeName:}" failed. No retries permitted until 2026-01-30 08:21:28.559632392 +0000 UTC m=+727.255179501 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/06799197-023a-4ed3-a378-9a1fbf25fda2-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-rsk45" (UID: "06799197-023a-4ed3-a378-9a1fbf25fda2") : secret "openshift-nmstate-webhook" not found
Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.059675 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f38692e7-8fd1-48e1-ab3b-07cbac975021-dbus-socket\") pod \"nmstate-handler-tnl9h\" (UID: \"f38692e7-8fd1-48e1-ab3b-07cbac975021\") " pod="openshift-nmstate/nmstate-handler-tnl9h"
Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.059683 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f38692e7-8fd1-48e1-ab3b-07cbac975021-nmstate-lock\") pod \"nmstate-handler-tnl9h\" (UID: \"f38692e7-8fd1-48e1-ab3b-07cbac975021\") " pod="openshift-nmstate/nmstate-handler-tnl9h"
Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.060042 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f38692e7-8fd1-48e1-ab3b-07cbac975021-dbus-socket\") pod \"nmstate-handler-tnl9h\" (UID: \"f38692e7-8fd1-48e1-ab3b-07cbac975021\") " pod="openshift-nmstate/nmstate-handler-tnl9h"
Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.071318 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-xdc74"
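Note: the tls-key-pair mount for nmstate-webhook-8474b5b9d8-rsk45 fails above only because the openshift-nmstate-webhook secret does not exist yet; kubelet parks the operation with durationBeforeRetry 500ms, and the retry succeeds further below (08:21:28.569355 / 08:21:28.572651) once the secret has been created. A minimal sketch of that retry-with-backoff pattern, under stated assumptions (mountVolume and the doubling policy are illustrative, not kubelet's actual implementation):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errNotFound stands in for "secret not found" while the object
// has not yet been created by its controller.
var errNotFound = errors.New(`secret "openshift-nmstate-webhook" not found`)

// mountVolume is a hypothetical stand-in for an idempotent setup
// operation such as MountVolume.SetUp; it succeeds on the third try.
func mountVolume(attempt int) error {
	if attempt < 2 {
		return errNotFound // source object not created yet
	}
	return nil
}

func main() {
	// Start at 500ms, as in the durationBeforeRetry seen in the log,
	// and double on every failure (illustrative policy only).
	backoff := 500 * time.Millisecond
	for attempt := 0; ; attempt++ {
		err := mountVolume(attempt)
		if err == nil {
			fmt.Println("MountVolume.SetUp succeeded")
			return
		}
		fmt.Printf("attempt %d failed: %v; no retries permitted for %v\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
}
```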
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-xdc74" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.082389 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfxfj\" (UniqueName: \"kubernetes.io/projected/f38692e7-8fd1-48e1-ab3b-07cbac975021-kube-api-access-dfxfj\") pod \"nmstate-handler-tnl9h\" (UID: \"f38692e7-8fd1-48e1-ab3b-07cbac975021\") " pod="openshift-nmstate/nmstate-handler-tnl9h" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.100982 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p524q\" (UniqueName: \"kubernetes.io/projected/06799197-023a-4ed3-a378-9a1fbf25fda2-kube-api-access-p524q\") pod \"nmstate-webhook-8474b5b9d8-rsk45\" (UID: \"06799197-023a-4ed3-a378-9a1fbf25fda2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rsk45" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.143427 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-tnl9h" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.160424 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwh7d\" (UniqueName: \"kubernetes.io/projected/b7e9a284-8b5c-4ae7-b388-3e9f907082d2-kube-api-access-zwh7d\") pod \"nmstate-console-plugin-7754f76f8b-ql9j9\" (UID: \"b7e9a284-8b5c-4ae7-b388-3e9f907082d2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-ql9j9" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.160475 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7e9a284-8b5c-4ae7-b388-3e9f907082d2-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-ql9j9\" (UID: \"b7e9a284-8b5c-4ae7-b388-3e9f907082d2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-ql9j9" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.160517 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b7e9a284-8b5c-4ae7-b388-3e9f907082d2-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-ql9j9\" (UID: \"b7e9a284-8b5c-4ae7-b388-3e9f907082d2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-ql9j9" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.161696 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b7e9a284-8b5c-4ae7-b388-3e9f907082d2-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-ql9j9\" (UID: \"b7e9a284-8b5c-4ae7-b388-3e9f907082d2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-ql9j9" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.170268 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7e9a284-8b5c-4ae7-b388-3e9f907082d2-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-ql9j9\" (UID: \"b7e9a284-8b5c-4ae7-b388-3e9f907082d2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-ql9j9" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.175339 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5897df5b9-8hs5t"] Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.176363 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.197970 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5897df5b9-8hs5t"] Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.200925 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwh7d\" (UniqueName: \"kubernetes.io/projected/b7e9a284-8b5c-4ae7-b388-3e9f907082d2-kube-api-access-zwh7d\") pod \"nmstate-console-plugin-7754f76f8b-ql9j9\" (UID: \"b7e9a284-8b5c-4ae7-b388-3e9f907082d2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-ql9j9" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.239447 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-ql9j9" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.265512 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eacb293-6cdf-4bfa-ad11-c81ea261a90c-trusted-ca-bundle\") pod \"console-5897df5b9-8hs5t\" (UID: \"6eacb293-6cdf-4bfa-ad11-c81ea261a90c\") " pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.265548 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6eacb293-6cdf-4bfa-ad11-c81ea261a90c-console-oauth-config\") pod \"console-5897df5b9-8hs5t\" (UID: \"6eacb293-6cdf-4bfa-ad11-c81ea261a90c\") " pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.265570 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6eacb293-6cdf-4bfa-ad11-c81ea261a90c-service-ca\") pod \"console-5897df5b9-8hs5t\" (UID: \"6eacb293-6cdf-4bfa-ad11-c81ea261a90c\") " pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.265587 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6eacb293-6cdf-4bfa-ad11-c81ea261a90c-console-config\") pod \"console-5897df5b9-8hs5t\" (UID: \"6eacb293-6cdf-4bfa-ad11-c81ea261a90c\") " pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.265635 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skr2v\" (UniqueName: \"kubernetes.io/projected/6eacb293-6cdf-4bfa-ad11-c81ea261a90c-kube-api-access-skr2v\") pod \"console-5897df5b9-8hs5t\" (UID: \"6eacb293-6cdf-4bfa-ad11-c81ea261a90c\") " pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.265678 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6eacb293-6cdf-4bfa-ad11-c81ea261a90c-console-serving-cert\") pod \"console-5897df5b9-8hs5t\" (UID: \"6eacb293-6cdf-4bfa-ad11-c81ea261a90c\") " pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.265706 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/6eacb293-6cdf-4bfa-ad11-c81ea261a90c-oauth-serving-cert\") pod \"console-5897df5b9-8hs5t\" (UID: \"6eacb293-6cdf-4bfa-ad11-c81ea261a90c\") " pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.366859 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eacb293-6cdf-4bfa-ad11-c81ea261a90c-trusted-ca-bundle\") pod \"console-5897df5b9-8hs5t\" (UID: \"6eacb293-6cdf-4bfa-ad11-c81ea261a90c\") " pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.366906 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6eacb293-6cdf-4bfa-ad11-c81ea261a90c-console-oauth-config\") pod \"console-5897df5b9-8hs5t\" (UID: \"6eacb293-6cdf-4bfa-ad11-c81ea261a90c\") " pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.366926 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6eacb293-6cdf-4bfa-ad11-c81ea261a90c-service-ca\") pod \"console-5897df5b9-8hs5t\" (UID: \"6eacb293-6cdf-4bfa-ad11-c81ea261a90c\") " pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.366943 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6eacb293-6cdf-4bfa-ad11-c81ea261a90c-console-config\") pod \"console-5897df5b9-8hs5t\" (UID: \"6eacb293-6cdf-4bfa-ad11-c81ea261a90c\") " pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.366986 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skr2v\" (UniqueName: \"kubernetes.io/projected/6eacb293-6cdf-4bfa-ad11-c81ea261a90c-kube-api-access-skr2v\") pod \"console-5897df5b9-8hs5t\" (UID: \"6eacb293-6cdf-4bfa-ad11-c81ea261a90c\") " pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.367021 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6eacb293-6cdf-4bfa-ad11-c81ea261a90c-console-serving-cert\") pod \"console-5897df5b9-8hs5t\" (UID: \"6eacb293-6cdf-4bfa-ad11-c81ea261a90c\") " pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.367060 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6eacb293-6cdf-4bfa-ad11-c81ea261a90c-oauth-serving-cert\") pod \"console-5897df5b9-8hs5t\" (UID: \"6eacb293-6cdf-4bfa-ad11-c81ea261a90c\") " pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.367888 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6eacb293-6cdf-4bfa-ad11-c81ea261a90c-oauth-serving-cert\") pod \"console-5897df5b9-8hs5t\" (UID: \"6eacb293-6cdf-4bfa-ad11-c81ea261a90c\") " pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.368112 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/6eacb293-6cdf-4bfa-ad11-c81ea261a90c-console-config\") pod \"console-5897df5b9-8hs5t\" (UID: \"6eacb293-6cdf-4bfa-ad11-c81ea261a90c\") " pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.368492 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6eacb293-6cdf-4bfa-ad11-c81ea261a90c-service-ca\") pod \"console-5897df5b9-8hs5t\" (UID: \"6eacb293-6cdf-4bfa-ad11-c81ea261a90c\") " pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.368682 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eacb293-6cdf-4bfa-ad11-c81ea261a90c-trusted-ca-bundle\") pod \"console-5897df5b9-8hs5t\" (UID: \"6eacb293-6cdf-4bfa-ad11-c81ea261a90c\") " pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.372643 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6eacb293-6cdf-4bfa-ad11-c81ea261a90c-console-serving-cert\") pod \"console-5897df5b9-8hs5t\" (UID: \"6eacb293-6cdf-4bfa-ad11-c81ea261a90c\") " pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.372981 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6eacb293-6cdf-4bfa-ad11-c81ea261a90c-console-oauth-config\") pod \"console-5897df5b9-8hs5t\" (UID: \"6eacb293-6cdf-4bfa-ad11-c81ea261a90c\") " pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.385853 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skr2v\" (UniqueName: \"kubernetes.io/projected/6eacb293-6cdf-4bfa-ad11-c81ea261a90c-kube-api-access-skr2v\") pod \"console-5897df5b9-8hs5t\" (UID: \"6eacb293-6cdf-4bfa-ad11-c81ea261a90c\") " pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.405974 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-xdc74"] Jan 30 08:21:28 crc kubenswrapper[4870]: W0130 08:21:28.453344 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86d16b9b_390e_442a_a74f_a9e32e92da59.slice/crio-358e4514ffb84330fd25e69ee0dc892c8d14eda97624943fd2053139b44896ef WatchSource:0}: Error finding container 358e4514ffb84330fd25e69ee0dc892c8d14eda97624943fd2053139b44896ef: Status 404 returned error can't find the container with id 358e4514ffb84330fd25e69ee0dc892c8d14eda97624943fd2053139b44896ef Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.553782 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.569355 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/06799197-023a-4ed3-a378-9a1fbf25fda2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rsk45\" (UID: \"06799197-023a-4ed3-a378-9a1fbf25fda2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rsk45" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.572651 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/06799197-023a-4ed3-a378-9a1fbf25fda2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rsk45\" (UID: \"06799197-023a-4ed3-a378-9a1fbf25fda2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rsk45" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.585514 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-ql9j9"] Jan 30 08:21:28 crc kubenswrapper[4870]: W0130 08:21:28.596545 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7e9a284_8b5c_4ae7_b388_3e9f907082d2.slice/crio-0ef8512f36f7725ccd0d4431188276aa474d003ed9ff559065ba6e4a430cec4e WatchSource:0}: Error finding container 0ef8512f36f7725ccd0d4431188276aa474d003ed9ff559065ba6e4a430cec4e: Status 404 returned error can't find the container with id 0ef8512f36f7725ccd0d4431188276aa474d003ed9ff559065ba6e4a430cec4e Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.722347 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rsk45" Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.769765 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5897df5b9-8hs5t"] Jan 30 08:21:28 crc kubenswrapper[4870]: I0130 08:21:28.961792 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rsk45"] Jan 30 08:21:28 crc kubenswrapper[4870]: W0130 08:21:28.968443 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06799197_023a_4ed3_a378_9a1fbf25fda2.slice/crio-4710f89c399c1b9235af97b8274073cc7b4d5dbf3832d3061f5bc9c987aa10ba WatchSource:0}: Error finding container 4710f89c399c1b9235af97b8274073cc7b4d5dbf3832d3061f5bc9c987aa10ba: Status 404 returned error can't find the container with id 4710f89c399c1b9235af97b8274073cc7b4d5dbf3832d3061f5bc9c987aa10ba Jan 30 08:21:29 crc kubenswrapper[4870]: I0130 08:21:29.253063 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5897df5b9-8hs5t" event={"ID":"6eacb293-6cdf-4bfa-ad11-c81ea261a90c","Type":"ContainerStarted","Data":"e055e94b96e4216d6559e74e36e9cd294ac30f34705e81bec5f402a7223bc3a9"} Jan 30 08:21:29 crc kubenswrapper[4870]: I0130 08:21:29.253392 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5897df5b9-8hs5t" event={"ID":"6eacb293-6cdf-4bfa-ad11-c81ea261a90c","Type":"ContainerStarted","Data":"10a27ce2c850faa744d614b32c949e7a8f6a20dfd82ff067bf94e290381b71f6"} Jan 30 08:21:29 crc kubenswrapper[4870]: I0130 08:21:29.255763 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tnl9h" 
event={"ID":"f38692e7-8fd1-48e1-ab3b-07cbac975021","Type":"ContainerStarted","Data":"714384e6f35d8667a2c0ec2049a76dedac05d249c1bf5d72b57cf6212965cfa8"} Jan 30 08:21:29 crc kubenswrapper[4870]: I0130 08:21:29.257184 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rsk45" event={"ID":"06799197-023a-4ed3-a378-9a1fbf25fda2","Type":"ContainerStarted","Data":"4710f89c399c1b9235af97b8274073cc7b4d5dbf3832d3061f5bc9c987aa10ba"} Jan 30 08:21:29 crc kubenswrapper[4870]: I0130 08:21:29.258786 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-xdc74" event={"ID":"86d16b9b-390e-442a-a74f-a9e32e92da59","Type":"ContainerStarted","Data":"358e4514ffb84330fd25e69ee0dc892c8d14eda97624943fd2053139b44896ef"} Jan 30 08:21:29 crc kubenswrapper[4870]: I0130 08:21:29.260241 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-ql9j9" event={"ID":"b7e9a284-8b5c-4ae7-b388-3e9f907082d2","Type":"ContainerStarted","Data":"0ef8512f36f7725ccd0d4431188276aa474d003ed9ff559065ba6e4a430cec4e"} Jan 30 08:21:32 crc kubenswrapper[4870]: I0130 08:21:32.113550 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5897df5b9-8hs5t" podStartSLOduration=4.113531443 podStartE2EDuration="4.113531443s" podCreationTimestamp="2026-01-30 08:21:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:21:29.282453007 +0000 UTC m=+727.978000136" watchObservedRunningTime="2026-01-30 08:21:32.113531443 +0000 UTC m=+730.809078562" Jan 30 08:21:32 crc kubenswrapper[4870]: I0130 08:21:32.284380 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tnl9h" event={"ID":"f38692e7-8fd1-48e1-ab3b-07cbac975021","Type":"ContainerStarted","Data":"41083f37094394c08c285d7c08f4c84f3894891efff7c6dd2c28be0016027952"} Jan 30 08:21:32 crc kubenswrapper[4870]: I0130 08:21:32.284733 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-tnl9h" Jan 30 08:21:32 crc kubenswrapper[4870]: I0130 08:21:32.286010 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rsk45" event={"ID":"06799197-023a-4ed3-a378-9a1fbf25fda2","Type":"ContainerStarted","Data":"1f845c3c8beb7dddd18c70f163a7aa11918010fc775f0a1a1d0a80c939beacfb"} Jan 30 08:21:32 crc kubenswrapper[4870]: I0130 08:21:32.286160 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rsk45" Jan 30 08:21:32 crc kubenswrapper[4870]: I0130 08:21:32.287582 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-xdc74" event={"ID":"86d16b9b-390e-442a-a74f-a9e32e92da59","Type":"ContainerStarted","Data":"75ecda3d83b35afa0c77ab887b7cfc05296e8eb903762fb7374a7f71db88c907"} Jan 30 08:21:32 crc kubenswrapper[4870]: I0130 08:21:32.289304 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-ql9j9" event={"ID":"b7e9a284-8b5c-4ae7-b388-3e9f907082d2","Type":"ContainerStarted","Data":"a23a103ca806d886d2eb48f5f9debd758377d6a78ef7d2c350b6e804e3c01268"} Jan 30 08:21:32 crc kubenswrapper[4870]: I0130 08:21:32.303409 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-handler-tnl9h" podStartSLOduration=2.119997817 podStartE2EDuration="5.303378985s" podCreationTimestamp="2026-01-30 08:21:27 +0000 UTC" firstStartedPulling="2026-01-30 08:21:28.303024408 +0000 UTC m=+726.998571517" lastFinishedPulling="2026-01-30 08:21:31.486405536 +0000 UTC m=+730.181952685" observedRunningTime="2026-01-30 08:21:32.301365082 +0000 UTC m=+730.996912231" watchObservedRunningTime="2026-01-30 08:21:32.303378985 +0000 UTC m=+730.998926104" Jan 30 08:21:32 crc kubenswrapper[4870]: I0130 08:21:32.320480 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-ql9j9" podStartSLOduration=2.455715438 podStartE2EDuration="5.320459993s" podCreationTimestamp="2026-01-30 08:21:27 +0000 UTC" firstStartedPulling="2026-01-30 08:21:28.601153096 +0000 UTC m=+727.296700205" lastFinishedPulling="2026-01-30 08:21:31.465897641 +0000 UTC m=+730.161444760" observedRunningTime="2026-01-30 08:21:32.319165341 +0000 UTC m=+731.014712440" watchObservedRunningTime="2026-01-30 08:21:32.320459993 +0000 UTC m=+731.016007102" Jan 30 08:21:32 crc kubenswrapper[4870]: I0130 08:21:32.352273 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rsk45" podStartSLOduration=2.836967319 podStartE2EDuration="5.352250782s" podCreationTimestamp="2026-01-30 08:21:27 +0000 UTC" firstStartedPulling="2026-01-30 08:21:28.971103703 +0000 UTC m=+727.666650812" lastFinishedPulling="2026-01-30 08:21:31.486387126 +0000 UTC m=+730.181934275" observedRunningTime="2026-01-30 08:21:32.346816191 +0000 UTC m=+731.042363320" watchObservedRunningTime="2026-01-30 08:21:32.352250782 +0000 UTC m=+731.047797901" Jan 30 08:21:34 crc kubenswrapper[4870]: I0130 08:21:34.307416 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-xdc74" event={"ID":"86d16b9b-390e-442a-a74f-a9e32e92da59","Type":"ContainerStarted","Data":"4d4ac78af20139f6a6ff96b6b70747a75a04e971452dbdf329654029c1e3adc6"} Jan 30 08:21:34 crc kubenswrapper[4870]: I0130 08:21:34.328677 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-xdc74" podStartSLOduration=1.934031157 podStartE2EDuration="7.328651472s" podCreationTimestamp="2026-01-30 08:21:27 +0000 UTC" firstStartedPulling="2026-01-30 08:21:28.456042561 +0000 UTC m=+727.151589670" lastFinishedPulling="2026-01-30 08:21:33.850662856 +0000 UTC m=+732.546209985" observedRunningTime="2026-01-30 08:21:34.324022626 +0000 UTC m=+733.019569735" watchObservedRunningTime="2026-01-30 08:21:34.328651472 +0000 UTC m=+733.024198581" Jan 30 08:21:38 crc kubenswrapper[4870]: I0130 08:21:38.220850 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-tnl9h" Jan 30 08:21:38 crc kubenswrapper[4870]: I0130 08:21:38.554870 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:38 crc kubenswrapper[4870]: I0130 08:21:38.554999 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:38 crc kubenswrapper[4870]: I0130 08:21:38.560755 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:39 crc kubenswrapper[4870]: I0130 08:21:39.353559 4870 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-console/console-5897df5b9-8hs5t" Jan 30 08:21:39 crc kubenswrapper[4870]: I0130 08:21:39.424669 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-2mj87"] Jan 30 08:21:48 crc kubenswrapper[4870]: I0130 08:21:48.731285 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rsk45" Jan 30 08:21:55 crc kubenswrapper[4870]: I0130 08:21:55.249270 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:21:55 crc kubenswrapper[4870]: I0130 08:21:55.250293 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:22:04 crc kubenswrapper[4870]: I0130 08:22:04.507600 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-2mj87" podUID="2aa49ce7-f902-408a-94f1-da14a661e813" containerName="console" containerID="cri-o://8f4187e8ca6a92ee4bd9e6838556b7bbedaba64d18a5aff0c37ced233ebdc3dd" gracePeriod=15 Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.571639 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2mj87_2aa49ce7-f902-408a-94f1-da14a661e813/console/0.log" Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.572120 4870 generic.go:334] "Generic (PLEG): container finished" podID="2aa49ce7-f902-408a-94f1-da14a661e813" containerID="8f4187e8ca6a92ee4bd9e6838556b7bbedaba64d18a5aff0c37ced233ebdc3dd" exitCode=2 Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.572161 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2mj87" event={"ID":"2aa49ce7-f902-408a-94f1-da14a661e813","Type":"ContainerDied","Data":"8f4187e8ca6a92ee4bd9e6838556b7bbedaba64d18a5aff0c37ced233ebdc3dd"} Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.577224 4870 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.749348 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2mj87_2aa49ce7-f902-408a-94f1-da14a661e813/console/0.log" Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.749431 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.777767 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2aa49ce7-f902-408a-94f1-da14a661e813-console-oauth-config\") pod \"2aa49ce7-f902-408a-94f1-da14a661e813\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.777852 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-console-config\") pod \"2aa49ce7-f902-408a-94f1-da14a661e813\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.777916 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-service-ca\") pod \"2aa49ce7-f902-408a-94f1-da14a661e813\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.777953 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-trusted-ca-bundle\") pod \"2aa49ce7-f902-408a-94f1-da14a661e813\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.777999 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2aa49ce7-f902-408a-94f1-da14a661e813-console-serving-cert\") pod \"2aa49ce7-f902-408a-94f1-da14a661e813\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.778037 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-oauth-serving-cert\") pod \"2aa49ce7-f902-408a-94f1-da14a661e813\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.778067 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct6hw\" (UniqueName: \"kubernetes.io/projected/2aa49ce7-f902-408a-94f1-da14a661e813-kube-api-access-ct6hw\") pod \"2aa49ce7-f902-408a-94f1-da14a661e813\" (UID: \"2aa49ce7-f902-408a-94f1-da14a661e813\") " Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.778828 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "2aa49ce7-f902-408a-94f1-da14a661e813" (UID: "2aa49ce7-f902-408a-94f1-da14a661e813"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.778942 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-service-ca" (OuterVolumeSpecName: "service-ca") pod "2aa49ce7-f902-408a-94f1-da14a661e813" (UID: "2aa49ce7-f902-408a-94f1-da14a661e813"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.779344 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2aa49ce7-f902-408a-94f1-da14a661e813" (UID: "2aa49ce7-f902-408a-94f1-da14a661e813"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.779559 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-console-config" (OuterVolumeSpecName: "console-config") pod "2aa49ce7-f902-408a-94f1-da14a661e813" (UID: "2aa49ce7-f902-408a-94f1-da14a661e813"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.786616 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa49ce7-f902-408a-94f1-da14a661e813-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2aa49ce7-f902-408a-94f1-da14a661e813" (UID: "2aa49ce7-f902-408a-94f1-da14a661e813"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.787513 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa49ce7-f902-408a-94f1-da14a661e813-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2aa49ce7-f902-408a-94f1-da14a661e813" (UID: "2aa49ce7-f902-408a-94f1-da14a661e813"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.802587 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aa49ce7-f902-408a-94f1-da14a661e813-kube-api-access-ct6hw" (OuterVolumeSpecName: "kube-api-access-ct6hw") pod "2aa49ce7-f902-408a-94f1-da14a661e813" (UID: "2aa49ce7-f902-408a-94f1-da14a661e813"). InnerVolumeSpecName "kube-api-access-ct6hw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.879568 4870 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2aa49ce7-f902-408a-94f1-da14a661e813-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.879603 4870 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.879613 4870 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.879622 4870 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.879631 4870 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2aa49ce7-f902-408a-94f1-da14a661e813-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.879640 4870 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2aa49ce7-f902-408a-94f1-da14a661e813-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:22:05 crc kubenswrapper[4870]: I0130 08:22:05.879649 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct6hw\" (UniqueName: \"kubernetes.io/projected/2aa49ce7-f902-408a-94f1-da14a661e813-kube-api-access-ct6hw\") on node \"crc\" DevicePath \"\"" Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.276470 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp"] Jan 30 08:22:06 crc kubenswrapper[4870]: E0130 08:22:06.277283 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa49ce7-f902-408a-94f1-da14a661e813" containerName="console" Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.277380 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa49ce7-f902-408a-94f1-da14a661e813" containerName="console" Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.277628 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa49ce7-f902-408a-94f1-da14a661e813" containerName="console" Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.278971 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp" Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.283231 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.284113 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm5k4\" (UniqueName: \"kubernetes.io/projected/f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef-kube-api-access-nm5k4\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp\" (UID: \"f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp" Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.284286 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp\" (UID: \"f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp" Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.284339 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp\" (UID: \"f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp" Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.298472 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp"] Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.385608 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp\" (UID: \"f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp" Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.386145 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp\" (UID: \"f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp" Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.386411 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm5k4\" (UniqueName: \"kubernetes.io/projected/f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef-kube-api-access-nm5k4\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp\" (UID: \"f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp" Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.386555 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp\" (UID: \"f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp" Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.386667 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp\" (UID: \"f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp" Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.404631 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm5k4\" (UniqueName: \"kubernetes.io/projected/f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef-kube-api-access-nm5k4\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp\" (UID: \"f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp" Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.581099 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2mj87_2aa49ce7-f902-408a-94f1-da14a661e813/console/0.log" Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.581175 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2mj87" event={"ID":"2aa49ce7-f902-408a-94f1-da14a661e813","Type":"ContainerDied","Data":"11900425e10bfa9bf6c9c649d5dac8048b1ed7e104a45655b98935b712a80d21"} Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.581231 4870 scope.go:117] "RemoveContainer" containerID="8f4187e8ca6a92ee4bd9e6838556b7bbedaba64d18a5aff0c37ced233ebdc3dd" Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.581264 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-2mj87" Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.593481 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp" Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.619142 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-2mj87"] Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.631424 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-2mj87"] Jan 30 08:22:06 crc kubenswrapper[4870]: I0130 08:22:06.900848 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp"] Jan 30 08:22:07 crc kubenswrapper[4870]: I0130 08:22:07.590823 4870 generic.go:334] "Generic (PLEG): container finished" podID="f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef" containerID="23ad735f0ffeebd3480c7cdebe5a8540768ddd6875662b4db45bf411655e8342" exitCode=0 Jan 30 08:22:07 crc kubenswrapper[4870]: I0130 08:22:07.590950 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp" event={"ID":"f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef","Type":"ContainerDied","Data":"23ad735f0ffeebd3480c7cdebe5a8540768ddd6875662b4db45bf411655e8342"} Jan 30 08:22:07 crc kubenswrapper[4870]: I0130 08:22:07.591213 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp" event={"ID":"f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef","Type":"ContainerStarted","Data":"34734030669c64ebec5619500a21ee834583e2523ce16b72917ee829c9a330c2"} Jan 30 08:22:08 crc kubenswrapper[4870]: I0130 08:22:08.083629 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aa49ce7-f902-408a-94f1-da14a661e813" path="/var/lib/kubelet/pods/2aa49ce7-f902-408a-94f1-da14a661e813/volumes" Jan 30 08:22:09 crc kubenswrapper[4870]: I0130 08:22:09.613384 4870 generic.go:334] "Generic (PLEG): container finished" podID="f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef" containerID="ea03883aaea5dae7986706ea4e1c998aca37b9c10769ae0311753a2237efe194" exitCode=0 Jan 30 08:22:09 crc kubenswrapper[4870]: I0130 08:22:09.613928 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp" event={"ID":"f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef","Type":"ContainerDied","Data":"ea03883aaea5dae7986706ea4e1c998aca37b9c10769ae0311753a2237efe194"} Jan 30 08:22:09 crc kubenswrapper[4870]: I0130 08:22:09.614821 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gt6dl"] Jan 30 08:22:09 crc kubenswrapper[4870]: I0130 08:22:09.616832 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gt6dl" Jan 30 08:22:09 crc kubenswrapper[4870]: I0130 08:22:09.627713 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gt6dl"] Jan 30 08:22:09 crc kubenswrapper[4870]: I0130 08:22:09.789795 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d56954f4-c2bc-42a1-bfa9-51433acd8c15-utilities\") pod \"redhat-operators-gt6dl\" (UID: \"d56954f4-c2bc-42a1-bfa9-51433acd8c15\") " pod="openshift-marketplace/redhat-operators-gt6dl" Jan 30 08:22:09 crc kubenswrapper[4870]: I0130 08:22:09.789901 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x2hf\" (UniqueName: \"kubernetes.io/projected/d56954f4-c2bc-42a1-bfa9-51433acd8c15-kube-api-access-9x2hf\") pod \"redhat-operators-gt6dl\" (UID: \"d56954f4-c2bc-42a1-bfa9-51433acd8c15\") " pod="openshift-marketplace/redhat-operators-gt6dl" Jan 30 08:22:09 crc kubenswrapper[4870]: I0130 08:22:09.790020 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d56954f4-c2bc-42a1-bfa9-51433acd8c15-catalog-content\") pod \"redhat-operators-gt6dl\" (UID: \"d56954f4-c2bc-42a1-bfa9-51433acd8c15\") " pod="openshift-marketplace/redhat-operators-gt6dl" Jan 30 08:22:09 crc kubenswrapper[4870]: I0130 08:22:09.891033 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d56954f4-c2bc-42a1-bfa9-51433acd8c15-utilities\") pod \"redhat-operators-gt6dl\" (UID: \"d56954f4-c2bc-42a1-bfa9-51433acd8c15\") " pod="openshift-marketplace/redhat-operators-gt6dl" Jan 30 08:22:09 crc kubenswrapper[4870]: I0130 08:22:09.891404 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x2hf\" (UniqueName: \"kubernetes.io/projected/d56954f4-c2bc-42a1-bfa9-51433acd8c15-kube-api-access-9x2hf\") pod \"redhat-operators-gt6dl\" (UID: \"d56954f4-c2bc-42a1-bfa9-51433acd8c15\") " pod="openshift-marketplace/redhat-operators-gt6dl" Jan 30 08:22:09 crc kubenswrapper[4870]: I0130 08:22:09.891453 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d56954f4-c2bc-42a1-bfa9-51433acd8c15-catalog-content\") pod \"redhat-operators-gt6dl\" (UID: \"d56954f4-c2bc-42a1-bfa9-51433acd8c15\") " pod="openshift-marketplace/redhat-operators-gt6dl" Jan 30 08:22:09 crc kubenswrapper[4870]: I0130 08:22:09.891625 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d56954f4-c2bc-42a1-bfa9-51433acd8c15-utilities\") pod \"redhat-operators-gt6dl\" (UID: \"d56954f4-c2bc-42a1-bfa9-51433acd8c15\") " pod="openshift-marketplace/redhat-operators-gt6dl" Jan 30 08:22:09 crc kubenswrapper[4870]: I0130 08:22:09.892135 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d56954f4-c2bc-42a1-bfa9-51433acd8c15-catalog-content\") pod \"redhat-operators-gt6dl\" (UID: \"d56954f4-c2bc-42a1-bfa9-51433acd8c15\") " pod="openshift-marketplace/redhat-operators-gt6dl" Jan 30 08:22:09 crc kubenswrapper[4870]: I0130 08:22:09.925125 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9x2hf\" (UniqueName: \"kubernetes.io/projected/d56954f4-c2bc-42a1-bfa9-51433acd8c15-kube-api-access-9x2hf\") pod \"redhat-operators-gt6dl\" (UID: \"d56954f4-c2bc-42a1-bfa9-51433acd8c15\") " pod="openshift-marketplace/redhat-operators-gt6dl" Jan 30 08:22:10 crc kubenswrapper[4870]: I0130 08:22:10.036648 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gt6dl" Jan 30 08:22:10 crc kubenswrapper[4870]: I0130 08:22:10.305373 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gt6dl"] Jan 30 08:22:10 crc kubenswrapper[4870]: W0130 08:22:10.314496 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd56954f4_c2bc_42a1_bfa9_51433acd8c15.slice/crio-16fd0a4741adbeb0be8f4ebb1f22a43c1cc7a74b395ee8549eea57d0f522edff WatchSource:0}: Error finding container 16fd0a4741adbeb0be8f4ebb1f22a43c1cc7a74b395ee8549eea57d0f522edff: Status 404 returned error can't find the container with id 16fd0a4741adbeb0be8f4ebb1f22a43c1cc7a74b395ee8549eea57d0f522edff Jan 30 08:22:10 crc kubenswrapper[4870]: I0130 08:22:10.622109 4870 generic.go:334] "Generic (PLEG): container finished" podID="f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef" containerID="ff6385d320f64d2aa5da3b6fecd0209ec974596a453947b1b96d5f37c7910d99" exitCode=0 Jan 30 08:22:10 crc kubenswrapper[4870]: I0130 08:22:10.622198 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp" event={"ID":"f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef","Type":"ContainerDied","Data":"ff6385d320f64d2aa5da3b6fecd0209ec974596a453947b1b96d5f37c7910d99"} Jan 30 08:22:10 crc kubenswrapper[4870]: I0130 08:22:10.624318 4870 generic.go:334] "Generic (PLEG): container finished" podID="d56954f4-c2bc-42a1-bfa9-51433acd8c15" containerID="86dedfe79832a79b748c29ec54e77e723fd9e1f44571d75dfdd4f0e65591ae8c" exitCode=0 Jan 30 08:22:10 crc kubenswrapper[4870]: I0130 08:22:10.624384 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gt6dl" event={"ID":"d56954f4-c2bc-42a1-bfa9-51433acd8c15","Type":"ContainerDied","Data":"86dedfe79832a79b748c29ec54e77e723fd9e1f44571d75dfdd4f0e65591ae8c"} Jan 30 08:22:10 crc kubenswrapper[4870]: I0130 08:22:10.624427 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gt6dl" event={"ID":"d56954f4-c2bc-42a1-bfa9-51433acd8c15","Type":"ContainerStarted","Data":"16fd0a4741adbeb0be8f4ebb1f22a43c1cc7a74b395ee8549eea57d0f522edff"} Jan 30 08:22:11 crc kubenswrapper[4870]: I0130 08:22:11.637576 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gt6dl" event={"ID":"d56954f4-c2bc-42a1-bfa9-51433acd8c15","Type":"ContainerStarted","Data":"eba381f30e92b7fbaef86ea5a4e2e0d1066c124c884cb019360f295ad221d067"} Jan 30 08:22:11 crc kubenswrapper[4870]: I0130 08:22:11.962511 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp" Jan 30 08:22:12 crc kubenswrapper[4870]: I0130 08:22:12.139783 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef-bundle\") pod \"f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef\" (UID: \"f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef\") " Jan 30 08:22:12 crc kubenswrapper[4870]: I0130 08:22:12.139863 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm5k4\" (UniqueName: \"kubernetes.io/projected/f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef-kube-api-access-nm5k4\") pod \"f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef\" (UID: \"f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef\") " Jan 30 08:22:12 crc kubenswrapper[4870]: I0130 08:22:12.139952 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef-util\") pod \"f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef\" (UID: \"f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef\") " Jan 30 08:22:12 crc kubenswrapper[4870]: I0130 08:22:12.141808 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef-bundle" (OuterVolumeSpecName: "bundle") pod "f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef" (UID: "f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:22:12 crc kubenswrapper[4870]: I0130 08:22:12.157813 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef-kube-api-access-nm5k4" (OuterVolumeSpecName: "kube-api-access-nm5k4") pod "f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef" (UID: "f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef"). InnerVolumeSpecName "kube-api-access-nm5k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:22:12 crc kubenswrapper[4870]: I0130 08:22:12.170589 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef-util" (OuterVolumeSpecName: "util") pod "f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef" (UID: "f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:22:12 crc kubenswrapper[4870]: I0130 08:22:12.242137 4870 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:22:12 crc kubenswrapper[4870]: I0130 08:22:12.242191 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm5k4\" (UniqueName: \"kubernetes.io/projected/f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef-kube-api-access-nm5k4\") on node \"crc\" DevicePath \"\"" Jan 30 08:22:12 crc kubenswrapper[4870]: I0130 08:22:12.242213 4870 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef-util\") on node \"crc\" DevicePath \"\"" Jan 30 08:22:12 crc kubenswrapper[4870]: I0130 08:22:12.649198 4870 generic.go:334] "Generic (PLEG): container finished" podID="d56954f4-c2bc-42a1-bfa9-51433acd8c15" containerID="eba381f30e92b7fbaef86ea5a4e2e0d1066c124c884cb019360f295ad221d067" exitCode=0 Jan 30 08:22:12 crc kubenswrapper[4870]: I0130 08:22:12.650436 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gt6dl" event={"ID":"d56954f4-c2bc-42a1-bfa9-51433acd8c15","Type":"ContainerDied","Data":"eba381f30e92b7fbaef86ea5a4e2e0d1066c124c884cb019360f295ad221d067"} Jan 30 08:22:12 crc kubenswrapper[4870]: I0130 08:22:12.656040 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp" event={"ID":"f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef","Type":"ContainerDied","Data":"34734030669c64ebec5619500a21ee834583e2523ce16b72917ee829c9a330c2"} Jan 30 08:22:12 crc kubenswrapper[4870]: I0130 08:22:12.656102 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34734030669c64ebec5619500a21ee834583e2523ce16b72917ee829c9a330c2" Jan 30 08:22:12 crc kubenswrapper[4870]: I0130 08:22:12.656227 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp" Jan 30 08:22:13 crc kubenswrapper[4870]: I0130 08:22:13.678198 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gt6dl" event={"ID":"d56954f4-c2bc-42a1-bfa9-51433acd8c15","Type":"ContainerStarted","Data":"cb0008e33f9dfe8f12fe1500f43e595181b49501dfd1a6a4cb9b1e2bb46080c3"} Jan 30 08:22:20 crc kubenswrapper[4870]: I0130 08:22:20.037711 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gt6dl" Jan 30 08:22:20 crc kubenswrapper[4870]: I0130 08:22:20.038356 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gt6dl" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.102145 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gt6dl" podUID="d56954f4-c2bc-42a1-bfa9-51433acd8c15" containerName="registry-server" probeResult="failure" output=< Jan 30 08:22:21 crc kubenswrapper[4870]: timeout: failed to connect service ":50051" within 1s Jan 30 08:22:21 crc kubenswrapper[4870]: > Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.494054 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gt6dl" podStartSLOduration=9.987654669 podStartE2EDuration="12.494029781s" podCreationTimestamp="2026-01-30 08:22:09 +0000 UTC" firstStartedPulling="2026-01-30 08:22:10.626054238 +0000 UTC m=+769.321601347" lastFinishedPulling="2026-01-30 08:22:13.13242931 +0000 UTC m=+771.827976459" observedRunningTime="2026-01-30 08:22:13.707471108 +0000 UTC m=+772.403018217" watchObservedRunningTime="2026-01-30 08:22:21.494029781 +0000 UTC m=+780.189576890" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.496636 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-567987c4fc-ff527"] Jan 30 08:22:21 crc kubenswrapper[4870]: E0130 08:22:21.496955 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef" containerName="util" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.496974 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef" containerName="util" Jan 30 08:22:21 crc kubenswrapper[4870]: E0130 08:22:21.496988 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef" containerName="pull" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.496995 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef" containerName="pull" Jan 30 08:22:21 crc kubenswrapper[4870]: E0130 08:22:21.497014 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef" containerName="extract" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.497021 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef" containerName="extract" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.497178 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef" containerName="extract" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.497696 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-567987c4fc-ff527" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.500648 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.500921 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.500986 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-svhct" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.501325 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.501354 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.529656 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-567987c4fc-ff527"] Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.684065 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62rqb\" (UniqueName: \"kubernetes.io/projected/70a9e498-4f2a-40ff-8837-7811ffe26e2d-kube-api-access-62rqb\") pod \"metallb-operator-controller-manager-567987c4fc-ff527\" (UID: \"70a9e498-4f2a-40ff-8837-7811ffe26e2d\") " pod="metallb-system/metallb-operator-controller-manager-567987c4fc-ff527" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.684166 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/70a9e498-4f2a-40ff-8837-7811ffe26e2d-apiservice-cert\") pod \"metallb-operator-controller-manager-567987c4fc-ff527\" (UID: \"70a9e498-4f2a-40ff-8837-7811ffe26e2d\") " pod="metallb-system/metallb-operator-controller-manager-567987c4fc-ff527" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.684227 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70a9e498-4f2a-40ff-8837-7811ffe26e2d-webhook-cert\") pod \"metallb-operator-controller-manager-567987c4fc-ff527\" (UID: \"70a9e498-4f2a-40ff-8837-7811ffe26e2d\") " pod="metallb-system/metallb-operator-controller-manager-567987c4fc-ff527" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.785952 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/70a9e498-4f2a-40ff-8837-7811ffe26e2d-apiservice-cert\") pod \"metallb-operator-controller-manager-567987c4fc-ff527\" (UID: \"70a9e498-4f2a-40ff-8837-7811ffe26e2d\") " pod="metallb-system/metallb-operator-controller-manager-567987c4fc-ff527" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.786051 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70a9e498-4f2a-40ff-8837-7811ffe26e2d-webhook-cert\") pod \"metallb-operator-controller-manager-567987c4fc-ff527\" (UID: \"70a9e498-4f2a-40ff-8837-7811ffe26e2d\") " pod="metallb-system/metallb-operator-controller-manager-567987c4fc-ff527" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.786091 4870 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62rqb\" (UniqueName: \"kubernetes.io/projected/70a9e498-4f2a-40ff-8837-7811ffe26e2d-kube-api-access-62rqb\") pod \"metallb-operator-controller-manager-567987c4fc-ff527\" (UID: \"70a9e498-4f2a-40ff-8837-7811ffe26e2d\") " pod="metallb-system/metallb-operator-controller-manager-567987c4fc-ff527" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.796061 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/70a9e498-4f2a-40ff-8837-7811ffe26e2d-apiservice-cert\") pod \"metallb-operator-controller-manager-567987c4fc-ff527\" (UID: \"70a9e498-4f2a-40ff-8837-7811ffe26e2d\") " pod="metallb-system/metallb-operator-controller-manager-567987c4fc-ff527" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.808187 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70a9e498-4f2a-40ff-8837-7811ffe26e2d-webhook-cert\") pod \"metallb-operator-controller-manager-567987c4fc-ff527\" (UID: \"70a9e498-4f2a-40ff-8837-7811ffe26e2d\") " pod="metallb-system/metallb-operator-controller-manager-567987c4fc-ff527" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.815981 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62rqb\" (UniqueName: \"kubernetes.io/projected/70a9e498-4f2a-40ff-8837-7811ffe26e2d-kube-api-access-62rqb\") pod \"metallb-operator-controller-manager-567987c4fc-ff527\" (UID: \"70a9e498-4f2a-40ff-8837-7811ffe26e2d\") " pod="metallb-system/metallb-operator-controller-manager-567987c4fc-ff527" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.851040 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-d5db5fbbd-k8pwt"] Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.852144 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-d5db5fbbd-k8pwt" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.859373 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.859442 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.867936 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-5nv88" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.887505 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-d5db5fbbd-k8pwt"] Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.990086 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f01bc9ba-9427-4c0a-927e-56b20aca72c5-webhook-cert\") pod \"metallb-operator-webhook-server-d5db5fbbd-k8pwt\" (UID: \"f01bc9ba-9427-4c0a-927e-56b20aca72c5\") " pod="metallb-system/metallb-operator-webhook-server-d5db5fbbd-k8pwt" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.990136 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q726w\" (UniqueName: \"kubernetes.io/projected/f01bc9ba-9427-4c0a-927e-56b20aca72c5-kube-api-access-q726w\") pod \"metallb-operator-webhook-server-d5db5fbbd-k8pwt\" (UID: \"f01bc9ba-9427-4c0a-927e-56b20aca72c5\") " pod="metallb-system/metallb-operator-webhook-server-d5db5fbbd-k8pwt" Jan 30 08:22:21 crc kubenswrapper[4870]: I0130 08:22:21.990180 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f01bc9ba-9427-4c0a-927e-56b20aca72c5-apiservice-cert\") pod \"metallb-operator-webhook-server-d5db5fbbd-k8pwt\" (UID: \"f01bc9ba-9427-4c0a-927e-56b20aca72c5\") " pod="metallb-system/metallb-operator-webhook-server-d5db5fbbd-k8pwt" Jan 30 08:22:22 crc kubenswrapper[4870]: I0130 08:22:22.091124 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f01bc9ba-9427-4c0a-927e-56b20aca72c5-webhook-cert\") pod \"metallb-operator-webhook-server-d5db5fbbd-k8pwt\" (UID: \"f01bc9ba-9427-4c0a-927e-56b20aca72c5\") " pod="metallb-system/metallb-operator-webhook-server-d5db5fbbd-k8pwt" Jan 30 08:22:22 crc kubenswrapper[4870]: I0130 08:22:22.091173 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q726w\" (UniqueName: \"kubernetes.io/projected/f01bc9ba-9427-4c0a-927e-56b20aca72c5-kube-api-access-q726w\") pod \"metallb-operator-webhook-server-d5db5fbbd-k8pwt\" (UID: \"f01bc9ba-9427-4c0a-927e-56b20aca72c5\") " pod="metallb-system/metallb-operator-webhook-server-d5db5fbbd-k8pwt" Jan 30 08:22:22 crc kubenswrapper[4870]: I0130 08:22:22.091220 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f01bc9ba-9427-4c0a-927e-56b20aca72c5-apiservice-cert\") pod \"metallb-operator-webhook-server-d5db5fbbd-k8pwt\" (UID: \"f01bc9ba-9427-4c0a-927e-56b20aca72c5\") " pod="metallb-system/metallb-operator-webhook-server-d5db5fbbd-k8pwt" Jan 30 08:22:22 crc kubenswrapper[4870]: I0130 08:22:22.096972 4870 
reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 30 08:22:22 crc kubenswrapper[4870]: I0130 08:22:22.107888 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q726w\" (UniqueName: \"kubernetes.io/projected/f01bc9ba-9427-4c0a-927e-56b20aca72c5-kube-api-access-q726w\") pod \"metallb-operator-webhook-server-d5db5fbbd-k8pwt\" (UID: \"f01bc9ba-9427-4c0a-927e-56b20aca72c5\") " pod="metallb-system/metallb-operator-webhook-server-d5db5fbbd-k8pwt"
Jan 30 08:22:22 crc kubenswrapper[4870]: I0130 08:22:22.107959 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f01bc9ba-9427-4c0a-927e-56b20aca72c5-apiservice-cert\") pod \"metallb-operator-webhook-server-d5db5fbbd-k8pwt\" (UID: \"f01bc9ba-9427-4c0a-927e-56b20aca72c5\") " pod="metallb-system/metallb-operator-webhook-server-d5db5fbbd-k8pwt"
Jan 30 08:22:22 crc kubenswrapper[4870]: I0130 08:22:22.109555 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f01bc9ba-9427-4c0a-927e-56b20aca72c5-webhook-cert\") pod \"metallb-operator-webhook-server-d5db5fbbd-k8pwt\" (UID: \"f01bc9ba-9427-4c0a-927e-56b20aca72c5\") " pod="metallb-system/metallb-operator-webhook-server-d5db5fbbd-k8pwt"
Jan 30 08:22:22 crc kubenswrapper[4870]: I0130 08:22:22.116336 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-svhct"
Jan 30 08:22:22 crc kubenswrapper[4870]: I0130 08:22:22.124422 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-567987c4fc-ff527"
Jan 30 08:22:22 crc kubenswrapper[4870]: I0130 08:22:22.172548 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-5nv88"
Jan 30 08:22:22 crc kubenswrapper[4870]: I0130 08:22:22.181079 4870 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-d5db5fbbd-k8pwt" Jan 30 08:22:22 crc kubenswrapper[4870]: I0130 08:22:22.473530 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-d5db5fbbd-k8pwt"] Jan 30 08:22:22 crc kubenswrapper[4870]: W0130 08:22:22.490082 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf01bc9ba_9427_4c0a_927e_56b20aca72c5.slice/crio-9549964e10582ed8ca13342baf610dd7408f97e3f2ba3743c7d0e8388d2b49a0 WatchSource:0}: Error finding container 9549964e10582ed8ca13342baf610dd7408f97e3f2ba3743c7d0e8388d2b49a0: Status 404 returned error can't find the container with id 9549964e10582ed8ca13342baf610dd7408f97e3f2ba3743c7d0e8388d2b49a0 Jan 30 08:22:22 crc kubenswrapper[4870]: I0130 08:22:22.558185 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-567987c4fc-ff527"] Jan 30 08:22:22 crc kubenswrapper[4870]: I0130 08:22:22.736090 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-567987c4fc-ff527" event={"ID":"70a9e498-4f2a-40ff-8837-7811ffe26e2d","Type":"ContainerStarted","Data":"c7ccb18c107754ef3074b7af2eceba1094f5baf988020718b4e2e2e61eed0fb2"} Jan 30 08:22:22 crc kubenswrapper[4870]: I0130 08:22:22.737019 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-d5db5fbbd-k8pwt" event={"ID":"f01bc9ba-9427-4c0a-927e-56b20aca72c5","Type":"ContainerStarted","Data":"9549964e10582ed8ca13342baf610dd7408f97e3f2ba3743c7d0e8388d2b49a0"} Jan 30 08:22:25 crc kubenswrapper[4870]: I0130 08:22:25.249211 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:22:25 crc kubenswrapper[4870]: I0130 08:22:25.249672 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:22:25 crc kubenswrapper[4870]: I0130 08:22:25.249714 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:22:25 crc kubenswrapper[4870]: I0130 08:22:25.250105 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8f05305445b605660ea999aab22b621a1da0c30929b1af3251f46624decd30be"} pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:22:25 crc kubenswrapper[4870]: I0130 08:22:25.250155 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" containerID="cri-o://8f05305445b605660ea999aab22b621a1da0c30929b1af3251f46624decd30be" gracePeriod=600 Jan 30 08:22:25 crc kubenswrapper[4870]: I0130 08:22:25.781066 4870 
generic.go:334] "Generic (PLEG): container finished" podID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerID="8f05305445b605660ea999aab22b621a1da0c30929b1af3251f46624decd30be" exitCode=0 Jan 30 08:22:25 crc kubenswrapper[4870]: I0130 08:22:25.781117 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerDied","Data":"8f05305445b605660ea999aab22b621a1da0c30929b1af3251f46624decd30be"} Jan 30 08:22:25 crc kubenswrapper[4870]: I0130 08:22:25.781159 4870 scope.go:117] "RemoveContainer" containerID="f6c12e6d68c222de0711d262165234642f23c035c994270d4c786852d266f7a2" Jan 30 08:22:28 crc kubenswrapper[4870]: I0130 08:22:28.823532 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-d5db5fbbd-k8pwt" event={"ID":"f01bc9ba-9427-4c0a-927e-56b20aca72c5","Type":"ContainerStarted","Data":"ecc3d94a340fe5228a9072bfacd4d01f19a2b249b4f96ed382ae75b45ba4c200"} Jan 30 08:22:28 crc kubenswrapper[4870]: I0130 08:22:28.824024 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-d5db5fbbd-k8pwt" Jan 30 08:22:28 crc kubenswrapper[4870]: I0130 08:22:28.825493 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-567987c4fc-ff527" event={"ID":"70a9e498-4f2a-40ff-8837-7811ffe26e2d","Type":"ContainerStarted","Data":"98496fd06f8eabaefdd0852a92ad197c462f38deda3bdb41f6c1fdf198b25ff5"} Jan 30 08:22:28 crc kubenswrapper[4870]: I0130 08:22:28.825620 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-567987c4fc-ff527" Jan 30 08:22:28 crc kubenswrapper[4870]: I0130 08:22:28.828209 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerStarted","Data":"902c9bb8b96922377d8d6da6fb79b392e9bc4a710daf7c3c1a77d5b9c2b536ac"} Jan 30 08:22:28 crc kubenswrapper[4870]: I0130 08:22:28.849641 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-d5db5fbbd-k8pwt" podStartSLOduration=2.608422007 podStartE2EDuration="7.849617815s" podCreationTimestamp="2026-01-30 08:22:21 +0000 UTC" firstStartedPulling="2026-01-30 08:22:22.512075434 +0000 UTC m=+781.207622533" lastFinishedPulling="2026-01-30 08:22:27.753271202 +0000 UTC m=+786.448818341" observedRunningTime="2026-01-30 08:22:28.843459092 +0000 UTC m=+787.539006211" watchObservedRunningTime="2026-01-30 08:22:28.849617815 +0000 UTC m=+787.545164934" Jan 30 08:22:28 crc kubenswrapper[4870]: I0130 08:22:28.873937 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-567987c4fc-ff527" podStartSLOduration=2.743526362 podStartE2EDuration="7.873909368s" podCreationTimestamp="2026-01-30 08:22:21 +0000 UTC" firstStartedPulling="2026-01-30 08:22:22.600647267 +0000 UTC m=+781.296194376" lastFinishedPulling="2026-01-30 08:22:27.731030233 +0000 UTC m=+786.426577382" observedRunningTime="2026-01-30 08:22:28.869689145 +0000 UTC m=+787.565236304" watchObservedRunningTime="2026-01-30 08:22:28.873909368 +0000 UTC m=+787.569456497" Jan 30 08:22:30 crc kubenswrapper[4870]: I0130 08:22:30.102549 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-gt6dl" Jan 30 08:22:30 crc kubenswrapper[4870]: I0130 08:22:30.160002 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gt6dl" Jan 30 08:22:30 crc kubenswrapper[4870]: I0130 08:22:30.409220 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gt6dl"] Jan 30 08:22:31 crc kubenswrapper[4870]: I0130 08:22:31.857970 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gt6dl" podUID="d56954f4-c2bc-42a1-bfa9-51433acd8c15" containerName="registry-server" containerID="cri-o://cb0008e33f9dfe8f12fe1500f43e595181b49501dfd1a6a4cb9b1e2bb46080c3" gracePeriod=2 Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.337527 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gt6dl" Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.455977 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x2hf\" (UniqueName: \"kubernetes.io/projected/d56954f4-c2bc-42a1-bfa9-51433acd8c15-kube-api-access-9x2hf\") pod \"d56954f4-c2bc-42a1-bfa9-51433acd8c15\" (UID: \"d56954f4-c2bc-42a1-bfa9-51433acd8c15\") " Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.456039 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d56954f4-c2bc-42a1-bfa9-51433acd8c15-utilities\") pod \"d56954f4-c2bc-42a1-bfa9-51433acd8c15\" (UID: \"d56954f4-c2bc-42a1-bfa9-51433acd8c15\") " Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.456083 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d56954f4-c2bc-42a1-bfa9-51433acd8c15-catalog-content\") pod \"d56954f4-c2bc-42a1-bfa9-51433acd8c15\" (UID: \"d56954f4-c2bc-42a1-bfa9-51433acd8c15\") " Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.456992 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d56954f4-c2bc-42a1-bfa9-51433acd8c15-utilities" (OuterVolumeSpecName: "utilities") pod "d56954f4-c2bc-42a1-bfa9-51433acd8c15" (UID: "d56954f4-c2bc-42a1-bfa9-51433acd8c15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.469598 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d56954f4-c2bc-42a1-bfa9-51433acd8c15-kube-api-access-9x2hf" (OuterVolumeSpecName: "kube-api-access-9x2hf") pod "d56954f4-c2bc-42a1-bfa9-51433acd8c15" (UID: "d56954f4-c2bc-42a1-bfa9-51433acd8c15"). InnerVolumeSpecName "kube-api-access-9x2hf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.558036 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x2hf\" (UniqueName: \"kubernetes.io/projected/d56954f4-c2bc-42a1-bfa9-51433acd8c15-kube-api-access-9x2hf\") on node \"crc\" DevicePath \"\"" Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.558076 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d56954f4-c2bc-42a1-bfa9-51433acd8c15-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.570529 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d56954f4-c2bc-42a1-bfa9-51433acd8c15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d56954f4-c2bc-42a1-bfa9-51433acd8c15" (UID: "d56954f4-c2bc-42a1-bfa9-51433acd8c15"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.659415 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d56954f4-c2bc-42a1-bfa9-51433acd8c15-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.885075 4870 generic.go:334] "Generic (PLEG): container finished" podID="d56954f4-c2bc-42a1-bfa9-51433acd8c15" containerID="cb0008e33f9dfe8f12fe1500f43e595181b49501dfd1a6a4cb9b1e2bb46080c3" exitCode=0 Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.885156 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gt6dl" event={"ID":"d56954f4-c2bc-42a1-bfa9-51433acd8c15","Type":"ContainerDied","Data":"cb0008e33f9dfe8f12fe1500f43e595181b49501dfd1a6a4cb9b1e2bb46080c3"} Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.885217 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gt6dl" event={"ID":"d56954f4-c2bc-42a1-bfa9-51433acd8c15","Type":"ContainerDied","Data":"16fd0a4741adbeb0be8f4ebb1f22a43c1cc7a74b395ee8549eea57d0f522edff"} Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.885230 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gt6dl" Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.885252 4870 scope.go:117] "RemoveContainer" containerID="cb0008e33f9dfe8f12fe1500f43e595181b49501dfd1a6a4cb9b1e2bb46080c3" Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.919108 4870 scope.go:117] "RemoveContainer" containerID="eba381f30e92b7fbaef86ea5a4e2e0d1066c124c884cb019360f295ad221d067" Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.935216 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gt6dl"] Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.939289 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gt6dl"] Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.961207 4870 scope.go:117] "RemoveContainer" containerID="86dedfe79832a79b748c29ec54e77e723fd9e1f44571d75dfdd4f0e65591ae8c" Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.984813 4870 scope.go:117] "RemoveContainer" containerID="cb0008e33f9dfe8f12fe1500f43e595181b49501dfd1a6a4cb9b1e2bb46080c3" Jan 30 08:22:32 crc kubenswrapper[4870]: E0130 08:22:32.985489 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb0008e33f9dfe8f12fe1500f43e595181b49501dfd1a6a4cb9b1e2bb46080c3\": container with ID starting with cb0008e33f9dfe8f12fe1500f43e595181b49501dfd1a6a4cb9b1e2bb46080c3 not found: ID does not exist" containerID="cb0008e33f9dfe8f12fe1500f43e595181b49501dfd1a6a4cb9b1e2bb46080c3" Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.985581 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb0008e33f9dfe8f12fe1500f43e595181b49501dfd1a6a4cb9b1e2bb46080c3"} err="failed to get container status \"cb0008e33f9dfe8f12fe1500f43e595181b49501dfd1a6a4cb9b1e2bb46080c3\": rpc error: code = NotFound desc = could not find container \"cb0008e33f9dfe8f12fe1500f43e595181b49501dfd1a6a4cb9b1e2bb46080c3\": container with ID starting with cb0008e33f9dfe8f12fe1500f43e595181b49501dfd1a6a4cb9b1e2bb46080c3 not found: ID does not exist" Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.985642 4870 scope.go:117] "RemoveContainer" containerID="eba381f30e92b7fbaef86ea5a4e2e0d1066c124c884cb019360f295ad221d067" Jan 30 08:22:32 crc kubenswrapper[4870]: E0130 08:22:32.986326 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eba381f30e92b7fbaef86ea5a4e2e0d1066c124c884cb019360f295ad221d067\": container with ID starting with eba381f30e92b7fbaef86ea5a4e2e0d1066c124c884cb019360f295ad221d067 not found: ID does not exist" containerID="eba381f30e92b7fbaef86ea5a4e2e0d1066c124c884cb019360f295ad221d067" Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.986417 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eba381f30e92b7fbaef86ea5a4e2e0d1066c124c884cb019360f295ad221d067"} err="failed to get container status \"eba381f30e92b7fbaef86ea5a4e2e0d1066c124c884cb019360f295ad221d067\": rpc error: code = NotFound desc = could not find container \"eba381f30e92b7fbaef86ea5a4e2e0d1066c124c884cb019360f295ad221d067\": container with ID starting with eba381f30e92b7fbaef86ea5a4e2e0d1066c124c884cb019360f295ad221d067 not found: ID does not exist" Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.986483 4870 scope.go:117] "RemoveContainer" 
containerID="86dedfe79832a79b748c29ec54e77e723fd9e1f44571d75dfdd4f0e65591ae8c" Jan 30 08:22:32 crc kubenswrapper[4870]: E0130 08:22:32.986923 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86dedfe79832a79b748c29ec54e77e723fd9e1f44571d75dfdd4f0e65591ae8c\": container with ID starting with 86dedfe79832a79b748c29ec54e77e723fd9e1f44571d75dfdd4f0e65591ae8c not found: ID does not exist" containerID="86dedfe79832a79b748c29ec54e77e723fd9e1f44571d75dfdd4f0e65591ae8c" Jan 30 08:22:32 crc kubenswrapper[4870]: I0130 08:22:32.986968 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86dedfe79832a79b748c29ec54e77e723fd9e1f44571d75dfdd4f0e65591ae8c"} err="failed to get container status \"86dedfe79832a79b748c29ec54e77e723fd9e1f44571d75dfdd4f0e65591ae8c\": rpc error: code = NotFound desc = could not find container \"86dedfe79832a79b748c29ec54e77e723fd9e1f44571d75dfdd4f0e65591ae8c\": container with ID starting with 86dedfe79832a79b748c29ec54e77e723fd9e1f44571d75dfdd4f0e65591ae8c not found: ID does not exist" Jan 30 08:22:34 crc kubenswrapper[4870]: I0130 08:22:34.083556 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d56954f4-c2bc-42a1-bfa9-51433acd8c15" path="/var/lib/kubelet/pods/d56954f4-c2bc-42a1-bfa9-51433acd8c15/volumes" Jan 30 08:22:42 crc kubenswrapper[4870]: I0130 08:22:42.184702 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-d5db5fbbd-k8pwt" Jan 30 08:23:02 crc kubenswrapper[4870]: I0130 08:23:02.128395 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-567987c4fc-ff527" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.004719 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-zwhkv"] Jan 30 08:23:03 crc kubenswrapper[4870]: E0130 08:23:03.005329 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56954f4-c2bc-42a1-bfa9-51433acd8c15" containerName="registry-server" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.005342 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56954f4-c2bc-42a1-bfa9-51433acd8c15" containerName="registry-server" Jan 30 08:23:03 crc kubenswrapper[4870]: E0130 08:23:03.005361 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56954f4-c2bc-42a1-bfa9-51433acd8c15" containerName="extract-utilities" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.005367 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56954f4-c2bc-42a1-bfa9-51433acd8c15" containerName="extract-utilities" Jan 30 08:23:03 crc kubenswrapper[4870]: E0130 08:23:03.005375 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56954f4-c2bc-42a1-bfa9-51433acd8c15" containerName="extract-content" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.005381 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56954f4-c2bc-42a1-bfa9-51433acd8c15" containerName="extract-content" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.005491 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="d56954f4-c2bc-42a1-bfa9-51433acd8c15" containerName="registry-server" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.007380 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.010946 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-v8xl8" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.014620 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.019690 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.021126 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-pnddd"] Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.022464 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pnddd" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.023982 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.055570 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-pnddd"] Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.117073 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-7q5pn"] Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.124221 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-7q5pn" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.127218 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.129565 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.130037 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-pjwvb" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.130057 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.160100 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/008f589d-dab4-42af-9a42-cb6c00737f44-metrics\") pod \"frr-k8s-zwhkv\" (UID: \"008f589d-dab4-42af-9a42-cb6c00737f44\") " pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.160290 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzx2p\" (UniqueName: \"kubernetes.io/projected/5d3d6557-5b19-47c3-9e81-09b8dee3b239-kube-api-access-tzx2p\") pod \"frr-k8s-webhook-server-7df86c4f6c-pnddd\" (UID: \"5d3d6557-5b19-47c3-9e81-09b8dee3b239\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pnddd" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.160410 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n94sp\" (UniqueName: \"kubernetes.io/projected/008f589d-dab4-42af-9a42-cb6c00737f44-kube-api-access-n94sp\") pod \"frr-k8s-zwhkv\" (UID: \"008f589d-dab4-42af-9a42-cb6c00737f44\") " pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:03 crc 
kubenswrapper[4870]: I0130 08:23:03.160472 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/008f589d-dab4-42af-9a42-cb6c00737f44-metrics-certs\") pod \"frr-k8s-zwhkv\" (UID: \"008f589d-dab4-42af-9a42-cb6c00737f44\") " pod="metallb-system/frr-k8s-zwhkv"
Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.160546 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d3d6557-5b19-47c3-9e81-09b8dee3b239-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-pnddd\" (UID: \"5d3d6557-5b19-47c3-9e81-09b8dee3b239\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pnddd"
Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.160641 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/008f589d-dab4-42af-9a42-cb6c00737f44-frr-startup\") pod \"frr-k8s-zwhkv\" (UID: \"008f589d-dab4-42af-9a42-cb6c00737f44\") " pod="metallb-system/frr-k8s-zwhkv"
Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.160720 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/008f589d-dab4-42af-9a42-cb6c00737f44-frr-conf\") pod \"frr-k8s-zwhkv\" (UID: \"008f589d-dab4-42af-9a42-cb6c00737f44\") " pod="metallb-system/frr-k8s-zwhkv"
Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.160781 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/008f589d-dab4-42af-9a42-cb6c00737f44-reloader\") pod \"frr-k8s-zwhkv\" (UID: \"008f589d-dab4-42af-9a42-cb6c00737f44\") " pod="metallb-system/frr-k8s-zwhkv"
Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.161028 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/008f589d-dab4-42af-9a42-cb6c00737f44-frr-sockets\") pod \"frr-k8s-zwhkv\" (UID: \"008f589d-dab4-42af-9a42-cb6c00737f44\") " pod="metallb-system/frr-k8s-zwhkv"
Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.152502 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-2dwrk"]
Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.165800 4870 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-2dwrk" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.173261 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.229863 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-2dwrk"] Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.263354 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84099c66-a13e-4949-ae36-7fa85a6a6a56-metrics-certs\") pod \"speaker-7q5pn\" (UID: \"84099c66-a13e-4949-ae36-7fa85a6a6a56\") " pod="metallb-system/speaker-7q5pn" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.263497 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n94sp\" (UniqueName: \"kubernetes.io/projected/008f589d-dab4-42af-9a42-cb6c00737f44-kube-api-access-n94sp\") pod \"frr-k8s-zwhkv\" (UID: \"008f589d-dab4-42af-9a42-cb6c00737f44\") " pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.263580 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/008f589d-dab4-42af-9a42-cb6c00737f44-metrics-certs\") pod \"frr-k8s-zwhkv\" (UID: \"008f589d-dab4-42af-9a42-cb6c00737f44\") " pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.263621 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d3d6557-5b19-47c3-9e81-09b8dee3b239-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-pnddd\" (UID: \"5d3d6557-5b19-47c3-9e81-09b8dee3b239\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pnddd" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.263688 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/008f589d-dab4-42af-9a42-cb6c00737f44-frr-startup\") pod \"frr-k8s-zwhkv\" (UID: \"008f589d-dab4-42af-9a42-cb6c00737f44\") " pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.263740 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brmxn\" (UniqueName: \"kubernetes.io/projected/b8c43bdb-2bfa-445b-9526-a03eb3f3ca20-kube-api-access-brmxn\") pod \"controller-6968d8fdc4-2dwrk\" (UID: \"b8c43bdb-2bfa-445b-9526-a03eb3f3ca20\") " pod="metallb-system/controller-6968d8fdc4-2dwrk" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.263776 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/008f589d-dab4-42af-9a42-cb6c00737f44-frr-conf\") pod \"frr-k8s-zwhkv\" (UID: \"008f589d-dab4-42af-9a42-cb6c00737f44\") " pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.263820 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j69ng\" (UniqueName: \"kubernetes.io/projected/84099c66-a13e-4949-ae36-7fa85a6a6a56-kube-api-access-j69ng\") pod \"speaker-7q5pn\" (UID: \"84099c66-a13e-4949-ae36-7fa85a6a6a56\") " pod="metallb-system/speaker-7q5pn" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.263837 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8c43bdb-2bfa-445b-9526-a03eb3f3ca20-metrics-certs\") pod \"controller-6968d8fdc4-2dwrk\" (UID: \"b8c43bdb-2bfa-445b-9526-a03eb3f3ca20\") " pod="metallb-system/controller-6968d8fdc4-2dwrk" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.263856 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/008f589d-dab4-42af-9a42-cb6c00737f44-reloader\") pod \"frr-k8s-zwhkv\" (UID: \"008f589d-dab4-42af-9a42-cb6c00737f44\") " pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.263932 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/008f589d-dab4-42af-9a42-cb6c00737f44-frr-sockets\") pod \"frr-k8s-zwhkv\" (UID: \"008f589d-dab4-42af-9a42-cb6c00737f44\") " pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.263986 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/84099c66-a13e-4949-ae36-7fa85a6a6a56-metallb-excludel2\") pod \"speaker-7q5pn\" (UID: \"84099c66-a13e-4949-ae36-7fa85a6a6a56\") " pod="metallb-system/speaker-7q5pn" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.264051 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/008f589d-dab4-42af-9a42-cb6c00737f44-metrics\") pod \"frr-k8s-zwhkv\" (UID: \"008f589d-dab4-42af-9a42-cb6c00737f44\") " pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.264094 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/84099c66-a13e-4949-ae36-7fa85a6a6a56-memberlist\") pod \"speaker-7q5pn\" (UID: \"84099c66-a13e-4949-ae36-7fa85a6a6a56\") " pod="metallb-system/speaker-7q5pn" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.264130 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b8c43bdb-2bfa-445b-9526-a03eb3f3ca20-cert\") pod \"controller-6968d8fdc4-2dwrk\" (UID: \"b8c43bdb-2bfa-445b-9526-a03eb3f3ca20\") " pod="metallb-system/controller-6968d8fdc4-2dwrk" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.264158 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzx2p\" (UniqueName: \"kubernetes.io/projected/5d3d6557-5b19-47c3-9e81-09b8dee3b239-kube-api-access-tzx2p\") pod \"frr-k8s-webhook-server-7df86c4f6c-pnddd\" (UID: \"5d3d6557-5b19-47c3-9e81-09b8dee3b239\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pnddd" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.264820 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/008f589d-dab4-42af-9a42-cb6c00737f44-reloader\") pod \"frr-k8s-zwhkv\" (UID: \"008f589d-dab4-42af-9a42-cb6c00737f44\") " pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.265042 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/008f589d-dab4-42af-9a42-cb6c00737f44-frr-sockets\") pod 
\"frr-k8s-zwhkv\" (UID: \"008f589d-dab4-42af-9a42-cb6c00737f44\") " pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.265227 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/008f589d-dab4-42af-9a42-cb6c00737f44-metrics\") pod \"frr-k8s-zwhkv\" (UID: \"008f589d-dab4-42af-9a42-cb6c00737f44\") " pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:03 crc kubenswrapper[4870]: E0130 08:23:03.265931 4870 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 30 08:23:03 crc kubenswrapper[4870]: E0130 08:23:03.266196 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/008f589d-dab4-42af-9a42-cb6c00737f44-metrics-certs podName:008f589d-dab4-42af-9a42-cb6c00737f44 nodeName:}" failed. No retries permitted until 2026-01-30 08:23:03.76618111 +0000 UTC m=+822.461728219 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/008f589d-dab4-42af-9a42-cb6c00737f44-metrics-certs") pod "frr-k8s-zwhkv" (UID: "008f589d-dab4-42af-9a42-cb6c00737f44") : secret "frr-k8s-certs-secret" not found Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.266813 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/008f589d-dab4-42af-9a42-cb6c00737f44-frr-conf\") pod \"frr-k8s-zwhkv\" (UID: \"008f589d-dab4-42af-9a42-cb6c00737f44\") " pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.266950 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/008f589d-dab4-42af-9a42-cb6c00737f44-frr-startup\") pod \"frr-k8s-zwhkv\" (UID: \"008f589d-dab4-42af-9a42-cb6c00737f44\") " pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.288186 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d3d6557-5b19-47c3-9e81-09b8dee3b239-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-pnddd\" (UID: \"5d3d6557-5b19-47c3-9e81-09b8dee3b239\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pnddd" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.291401 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n94sp\" (UniqueName: \"kubernetes.io/projected/008f589d-dab4-42af-9a42-cb6c00737f44-kube-api-access-n94sp\") pod \"frr-k8s-zwhkv\" (UID: \"008f589d-dab4-42af-9a42-cb6c00737f44\") " pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.291951 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzx2p\" (UniqueName: \"kubernetes.io/projected/5d3d6557-5b19-47c3-9e81-09b8dee3b239-kube-api-access-tzx2p\") pod \"frr-k8s-webhook-server-7df86c4f6c-pnddd\" (UID: \"5d3d6557-5b19-47c3-9e81-09b8dee3b239\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pnddd" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.336948 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pnddd" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.365494 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/84099c66-a13e-4949-ae36-7fa85a6a6a56-metallb-excludel2\") pod \"speaker-7q5pn\" (UID: \"84099c66-a13e-4949-ae36-7fa85a6a6a56\") " pod="metallb-system/speaker-7q5pn" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.365541 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/84099c66-a13e-4949-ae36-7fa85a6a6a56-memberlist\") pod \"speaker-7q5pn\" (UID: \"84099c66-a13e-4949-ae36-7fa85a6a6a56\") " pod="metallb-system/speaker-7q5pn" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.365561 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b8c43bdb-2bfa-445b-9526-a03eb3f3ca20-cert\") pod \"controller-6968d8fdc4-2dwrk\" (UID: \"b8c43bdb-2bfa-445b-9526-a03eb3f3ca20\") " pod="metallb-system/controller-6968d8fdc4-2dwrk" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.365585 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84099c66-a13e-4949-ae36-7fa85a6a6a56-metrics-certs\") pod \"speaker-7q5pn\" (UID: \"84099c66-a13e-4949-ae36-7fa85a6a6a56\") " pod="metallb-system/speaker-7q5pn" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.365644 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brmxn\" (UniqueName: \"kubernetes.io/projected/b8c43bdb-2bfa-445b-9526-a03eb3f3ca20-kube-api-access-brmxn\") pod \"controller-6968d8fdc4-2dwrk\" (UID: \"b8c43bdb-2bfa-445b-9526-a03eb3f3ca20\") " pod="metallb-system/controller-6968d8fdc4-2dwrk" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.365667 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j69ng\" (UniqueName: \"kubernetes.io/projected/84099c66-a13e-4949-ae36-7fa85a6a6a56-kube-api-access-j69ng\") pod \"speaker-7q5pn\" (UID: \"84099c66-a13e-4949-ae36-7fa85a6a6a56\") " pod="metallb-system/speaker-7q5pn" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.365684 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8c43bdb-2bfa-445b-9526-a03eb3f3ca20-metrics-certs\") pod \"controller-6968d8fdc4-2dwrk\" (UID: \"b8c43bdb-2bfa-445b-9526-a03eb3f3ca20\") " pod="metallb-system/controller-6968d8fdc4-2dwrk" Jan 30 08:23:03 crc kubenswrapper[4870]: E0130 08:23:03.365797 4870 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 30 08:23:03 crc kubenswrapper[4870]: E0130 08:23:03.365834 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8c43bdb-2bfa-445b-9526-a03eb3f3ca20-metrics-certs podName:b8c43bdb-2bfa-445b-9526-a03eb3f3ca20 nodeName:}" failed. No retries permitted until 2026-01-30 08:23:03.865822031 +0000 UTC m=+822.561369140 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b8c43bdb-2bfa-445b-9526-a03eb3f3ca20-metrics-certs") pod "controller-6968d8fdc4-2dwrk" (UID: "b8c43bdb-2bfa-445b-9526-a03eb3f3ca20") : secret "controller-certs-secret" not found Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.366475 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/84099c66-a13e-4949-ae36-7fa85a6a6a56-metallb-excludel2\") pod \"speaker-7q5pn\" (UID: \"84099c66-a13e-4949-ae36-7fa85a6a6a56\") " pod="metallb-system/speaker-7q5pn" Jan 30 08:23:03 crc kubenswrapper[4870]: E0130 08:23:03.366535 4870 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 08:23:03 crc kubenswrapper[4870]: E0130 08:23:03.366563 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84099c66-a13e-4949-ae36-7fa85a6a6a56-memberlist podName:84099c66-a13e-4949-ae36-7fa85a6a6a56 nodeName:}" failed. No retries permitted until 2026-01-30 08:23:03.866554254 +0000 UTC m=+822.562101363 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/84099c66-a13e-4949-ae36-7fa85a6a6a56-memberlist") pod "speaker-7q5pn" (UID: "84099c66-a13e-4949-ae36-7fa85a6a6a56") : secret "metallb-memberlist" not found Jan 30 08:23:03 crc kubenswrapper[4870]: E0130 08:23:03.366624 4870 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 30 08:23:03 crc kubenswrapper[4870]: E0130 08:23:03.366648 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84099c66-a13e-4949-ae36-7fa85a6a6a56-metrics-certs podName:84099c66-a13e-4949-ae36-7fa85a6a6a56 nodeName:}" failed. No retries permitted until 2026-01-30 08:23:03.866642086 +0000 UTC m=+822.562189195 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84099c66-a13e-4949-ae36-7fa85a6a6a56-metrics-certs") pod "speaker-7q5pn" (UID: "84099c66-a13e-4949-ae36-7fa85a6a6a56") : secret "speaker-certs-secret" not found Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.368204 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.381791 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b8c43bdb-2bfa-445b-9526-a03eb3f3ca20-cert\") pod \"controller-6968d8fdc4-2dwrk\" (UID: \"b8c43bdb-2bfa-445b-9526-a03eb3f3ca20\") " pod="metallb-system/controller-6968d8fdc4-2dwrk" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.381955 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j69ng\" (UniqueName: \"kubernetes.io/projected/84099c66-a13e-4949-ae36-7fa85a6a6a56-kube-api-access-j69ng\") pod \"speaker-7q5pn\" (UID: \"84099c66-a13e-4949-ae36-7fa85a6a6a56\") " pod="metallb-system/speaker-7q5pn" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.385807 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brmxn\" (UniqueName: \"kubernetes.io/projected/b8c43bdb-2bfa-445b-9526-a03eb3f3ca20-kube-api-access-brmxn\") pod \"controller-6968d8fdc4-2dwrk\" (UID: \"b8c43bdb-2bfa-445b-9526-a03eb3f3ca20\") " pod="metallb-system/controller-6968d8fdc4-2dwrk" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.766821 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-pnddd"] Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.772291 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/008f589d-dab4-42af-9a42-cb6c00737f44-metrics-certs\") pod \"frr-k8s-zwhkv\" (UID: \"008f589d-dab4-42af-9a42-cb6c00737f44\") " pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.781989 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/008f589d-dab4-42af-9a42-cb6c00737f44-metrics-certs\") pod \"frr-k8s-zwhkv\" (UID: \"008f589d-dab4-42af-9a42-cb6c00737f44\") " pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.874374 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8c43bdb-2bfa-445b-9526-a03eb3f3ca20-metrics-certs\") pod \"controller-6968d8fdc4-2dwrk\" (UID: \"b8c43bdb-2bfa-445b-9526-a03eb3f3ca20\") " pod="metallb-system/controller-6968d8fdc4-2dwrk" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.874988 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/84099c66-a13e-4949-ae36-7fa85a6a6a56-memberlist\") pod \"speaker-7q5pn\" (UID: \"84099c66-a13e-4949-ae36-7fa85a6a6a56\") " pod="metallb-system/speaker-7q5pn" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.875053 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84099c66-a13e-4949-ae36-7fa85a6a6a56-metrics-certs\") pod \"speaker-7q5pn\" (UID: \"84099c66-a13e-4949-ae36-7fa85a6a6a56\") " pod="metallb-system/speaker-7q5pn" Jan 30 
08:23:03 crc kubenswrapper[4870]: E0130 08:23:03.875338 4870 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 08:23:03 crc kubenswrapper[4870]: E0130 08:23:03.875493 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84099c66-a13e-4949-ae36-7fa85a6a6a56-memberlist podName:84099c66-a13e-4949-ae36-7fa85a6a6a56 nodeName:}" failed. No retries permitted until 2026-01-30 08:23:04.875453242 +0000 UTC m=+823.571000531 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/84099c66-a13e-4949-ae36-7fa85a6a6a56-memberlist") pod "speaker-7q5pn" (UID: "84099c66-a13e-4949-ae36-7fa85a6a6a56") : secret "metallb-memberlist" not found Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.881236 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84099c66-a13e-4949-ae36-7fa85a6a6a56-metrics-certs\") pod \"speaker-7q5pn\" (UID: \"84099c66-a13e-4949-ae36-7fa85a6a6a56\") " pod="metallb-system/speaker-7q5pn" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.883211 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8c43bdb-2bfa-445b-9526-a03eb3f3ca20-metrics-certs\") pod \"controller-6968d8fdc4-2dwrk\" (UID: \"b8c43bdb-2bfa-445b-9526-a03eb3f3ca20\") " pod="metallb-system/controller-6968d8fdc4-2dwrk" Jan 30 08:23:03 crc kubenswrapper[4870]: I0130 08:23:03.928436 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:04 crc kubenswrapper[4870]: I0130 08:23:04.118011 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-2dwrk" Jan 30 08:23:04 crc kubenswrapper[4870]: I0130 08:23:04.165742 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwhkv" event={"ID":"008f589d-dab4-42af-9a42-cb6c00737f44","Type":"ContainerStarted","Data":"ecb2ffa0215951bd5b68430a9bc7995140a05fff24a8628b6b8cb7567e4ca4d8"} Jan 30 08:23:04 crc kubenswrapper[4870]: I0130 08:23:04.168146 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pnddd" event={"ID":"5d3d6557-5b19-47c3-9e81-09b8dee3b239","Type":"ContainerStarted","Data":"ad98ce5aaf450a211c769bbcf26d2bc9f218e7a3e5f4b71d8e217eb142b2ca51"} Jan 30 08:23:04 crc kubenswrapper[4870]: I0130 08:23:04.395738 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-2dwrk"] Jan 30 08:23:04 crc kubenswrapper[4870]: W0130 08:23:04.401319 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8c43bdb_2bfa_445b_9526_a03eb3f3ca20.slice/crio-21aa7840d46a8d8787a99c8ae73d2ccf695c014bd734ffff4dcdb62190100de7 WatchSource:0}: Error finding container 21aa7840d46a8d8787a99c8ae73d2ccf695c014bd734ffff4dcdb62190100de7: Status 404 returned error can't find the container with id 21aa7840d46a8d8787a99c8ae73d2ccf695c014bd734ffff4dcdb62190100de7 Jan 30 08:23:04 crc kubenswrapper[4870]: I0130 08:23:04.891646 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/84099c66-a13e-4949-ae36-7fa85a6a6a56-memberlist\") pod \"speaker-7q5pn\" (UID: \"84099c66-a13e-4949-ae36-7fa85a6a6a56\") " pod="metallb-system/speaker-7q5pn" Jan 30 08:23:04 crc kubenswrapper[4870]: I0130 08:23:04.900450 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/84099c66-a13e-4949-ae36-7fa85a6a6a56-memberlist\") pod \"speaker-7q5pn\" (UID: \"84099c66-a13e-4949-ae36-7fa85a6a6a56\") " pod="metallb-system/speaker-7q5pn" Jan 30 08:23:04 crc kubenswrapper[4870]: I0130 08:23:04.952997 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-7q5pn" Jan 30 08:23:04 crc kubenswrapper[4870]: W0130 08:23:04.980348 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84099c66_a13e_4949_ae36_7fa85a6a6a56.slice/crio-85e4c1cf49f7964f2023663bbe40b88939b1fa679ece7a89b6a3698459539c7c WatchSource:0}: Error finding container 85e4c1cf49f7964f2023663bbe40b88939b1fa679ece7a89b6a3698459539c7c: Status 404 returned error can't find the container with id 85e4c1cf49f7964f2023663bbe40b88939b1fa679ece7a89b6a3698459539c7c Jan 30 08:23:05 crc kubenswrapper[4870]: I0130 08:23:05.186941 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-2dwrk" event={"ID":"b8c43bdb-2bfa-445b-9526-a03eb3f3ca20","Type":"ContainerStarted","Data":"3d5c87be1abee24970012c8766e657f259c3ff7ee82cb8e4682fee1e6545daf3"} Jan 30 08:23:05 crc kubenswrapper[4870]: I0130 08:23:05.187003 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-2dwrk" event={"ID":"b8c43bdb-2bfa-445b-9526-a03eb3f3ca20","Type":"ContainerStarted","Data":"c48c352b27effa7cbb1874f70695798e077923427c77846470ddc075dbfa8dea"} Jan 30 08:23:05 crc kubenswrapper[4870]: I0130 08:23:05.187022 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-2dwrk" event={"ID":"b8c43bdb-2bfa-445b-9526-a03eb3f3ca20","Type":"ContainerStarted","Data":"21aa7840d46a8d8787a99c8ae73d2ccf695c014bd734ffff4dcdb62190100de7"} Jan 30 08:23:05 crc kubenswrapper[4870]: I0130 08:23:05.188346 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-2dwrk" Jan 30 08:23:05 crc kubenswrapper[4870]: I0130 08:23:05.205793 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7q5pn" event={"ID":"84099c66-a13e-4949-ae36-7fa85a6a6a56","Type":"ContainerStarted","Data":"85e4c1cf49f7964f2023663bbe40b88939b1fa679ece7a89b6a3698459539c7c"} Jan 30 08:23:05 crc kubenswrapper[4870]: I0130 08:23:05.229979 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-2dwrk" podStartSLOduration=2.229963615 podStartE2EDuration="2.229963615s" podCreationTimestamp="2026-01-30 08:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:23:05.227457857 +0000 UTC m=+823.923004966" watchObservedRunningTime="2026-01-30 08:23:05.229963615 +0000 UTC m=+823.925510724" Jan 30 08:23:06 crc kubenswrapper[4870]: I0130 08:23:06.242106 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7q5pn" event={"ID":"84099c66-a13e-4949-ae36-7fa85a6a6a56","Type":"ContainerStarted","Data":"f5e3a2f7ff44e533b19e51c89160d612b7d96a53b8be859640b1eb341b44a9af"} Jan 30 08:23:06 crc kubenswrapper[4870]: I0130 08:23:06.246603 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7q5pn" event={"ID":"84099c66-a13e-4949-ae36-7fa85a6a6a56","Type":"ContainerStarted","Data":"abaf353f709893509b53cd49a35ad8650090d541b7050cd5cc67e814e0605ad6"} Jan 30 08:23:06 crc kubenswrapper[4870]: I0130 08:23:06.246628 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-7q5pn" Jan 30 08:23:06 crc kubenswrapper[4870]: I0130 08:23:06.273072 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/speaker-7q5pn" podStartSLOduration=3.273046584 podStartE2EDuration="3.273046584s" podCreationTimestamp="2026-01-30 08:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:23:06.262789673 +0000 UTC m=+824.958336782" watchObservedRunningTime="2026-01-30 08:23:06.273046584 +0000 UTC m=+824.968593693" Jan 30 08:23:12 crc kubenswrapper[4870]: I0130 08:23:12.321124 4870 generic.go:334] "Generic (PLEG): container finished" podID="008f589d-dab4-42af-9a42-cb6c00737f44" containerID="80cbadd69312293fca500c2c971a875ea76ef3bdb2f21bf6a70ff5ad0a4f6a30" exitCode=0 Jan 30 08:23:12 crc kubenswrapper[4870]: I0130 08:23:12.321292 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwhkv" event={"ID":"008f589d-dab4-42af-9a42-cb6c00737f44","Type":"ContainerDied","Data":"80cbadd69312293fca500c2c971a875ea76ef3bdb2f21bf6a70ff5ad0a4f6a30"} Jan 30 08:23:12 crc kubenswrapper[4870]: I0130 08:23:12.325830 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pnddd" event={"ID":"5d3d6557-5b19-47c3-9e81-09b8dee3b239","Type":"ContainerStarted","Data":"b5d9070f6ddcf0d0673a8ea448988136752b8a01ebb223d220e0464931ee2560"} Jan 30 08:23:12 crc kubenswrapper[4870]: I0130 08:23:12.326142 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pnddd" Jan 30 08:23:12 crc kubenswrapper[4870]: I0130 08:23:12.400170 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pnddd" podStartSLOduration=1.386660182 podStartE2EDuration="9.400125134s" podCreationTimestamp="2026-01-30 08:23:03 +0000 UTC" firstStartedPulling="2026-01-30 08:23:03.779133386 +0000 UTC m=+822.474680485" lastFinishedPulling="2026-01-30 08:23:11.792598328 +0000 UTC m=+830.488145437" observedRunningTime="2026-01-30 08:23:12.392720731 +0000 UTC m=+831.088267870" watchObservedRunningTime="2026-01-30 08:23:12.400125134 +0000 UTC m=+831.095672273" Jan 30 08:23:13 crc kubenswrapper[4870]: I0130 08:23:13.339571 4870 generic.go:334] "Generic (PLEG): container finished" podID="008f589d-dab4-42af-9a42-cb6c00737f44" containerID="198af9d63563856ccb8d6a572a13ca85cf978f2430ad5c5123d0f79a257559fc" exitCode=0 Jan 30 08:23:13 crc kubenswrapper[4870]: I0130 08:23:13.341454 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwhkv" event={"ID":"008f589d-dab4-42af-9a42-cb6c00737f44","Type":"ContainerDied","Data":"198af9d63563856ccb8d6a572a13ca85cf978f2430ad5c5123d0f79a257559fc"} Jan 30 08:23:14 crc kubenswrapper[4870]: I0130 08:23:14.125354 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-2dwrk" Jan 30 08:23:14 crc kubenswrapper[4870]: I0130 08:23:14.348985 4870 generic.go:334] "Generic (PLEG): container finished" podID="008f589d-dab4-42af-9a42-cb6c00737f44" containerID="85498cf689e4c6dc227af410f98c3d3dbcdfc33806253794730e19a8da67ed36" exitCode=0 Jan 30 08:23:14 crc kubenswrapper[4870]: I0130 08:23:14.349034 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwhkv" event={"ID":"008f589d-dab4-42af-9a42-cb6c00737f44","Type":"ContainerDied","Data":"85498cf689e4c6dc227af410f98c3d3dbcdfc33806253794730e19a8da67ed36"} Jan 30 08:23:15 crc kubenswrapper[4870]: I0130 08:23:15.366975 4870 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="metallb-system/frr-k8s-zwhkv" event={"ID":"008f589d-dab4-42af-9a42-cb6c00737f44","Type":"ContainerStarted","Data":"80d5a89ce3d77979dfd983dff4ec762a001f3cdc86340cb3e3c2005d602d6d60"} Jan 30 08:23:15 crc kubenswrapper[4870]: I0130 08:23:15.367433 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwhkv" event={"ID":"008f589d-dab4-42af-9a42-cb6c00737f44","Type":"ContainerStarted","Data":"c33a2a2501e909f95713117af712fcc9abab5a57e7972491254d6b729622c353"} Jan 30 08:23:15 crc kubenswrapper[4870]: I0130 08:23:15.367453 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwhkv" event={"ID":"008f589d-dab4-42af-9a42-cb6c00737f44","Type":"ContainerStarted","Data":"ab2d48e04b8b9d44659b0227037a829e8ed5c20ebfb3d9502a2c7000338fc051"} Jan 30 08:23:15 crc kubenswrapper[4870]: I0130 08:23:15.367471 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwhkv" event={"ID":"008f589d-dab4-42af-9a42-cb6c00737f44","Type":"ContainerStarted","Data":"60fb3eb637725398e2b738818807814acdc09a14628bd0ee1e8ce3900f9231a8"} Jan 30 08:23:15 crc kubenswrapper[4870]: I0130 08:23:15.367488 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwhkv" event={"ID":"008f589d-dab4-42af-9a42-cb6c00737f44","Type":"ContainerStarted","Data":"7ad5e234345a49675c16ae95e5800d919b2fc4f07402c7d3e128db23c50c6726"} Jan 30 08:23:16 crc kubenswrapper[4870]: I0130 08:23:16.382986 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwhkv" event={"ID":"008f589d-dab4-42af-9a42-cb6c00737f44","Type":"ContainerStarted","Data":"b72cb05bacf644fbbf2323975c9d7ece66c9cc42ea6ce09fec59bee5d456e473"} Jan 30 08:23:16 crc kubenswrapper[4870]: I0130 08:23:16.383626 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:16 crc kubenswrapper[4870]: I0130 08:23:16.424313 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-zwhkv" podStartSLOduration=6.753964155 podStartE2EDuration="14.424280007s" podCreationTimestamp="2026-01-30 08:23:02 +0000 UTC" firstStartedPulling="2026-01-30 08:23:04.121101469 +0000 UTC m=+822.816648608" lastFinishedPulling="2026-01-30 08:23:11.791417351 +0000 UTC m=+830.486964460" observedRunningTime="2026-01-30 08:23:16.417996519 +0000 UTC m=+835.113543678" watchObservedRunningTime="2026-01-30 08:23:16.424280007 +0000 UTC m=+835.119827156" Jan 30 08:23:18 crc kubenswrapper[4870]: I0130 08:23:18.929384 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:19 crc kubenswrapper[4870]: I0130 08:23:19.009885 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:23 crc kubenswrapper[4870]: I0130 08:23:23.350529 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pnddd" Jan 30 08:23:24 crc kubenswrapper[4870]: I0130 08:23:24.956868 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-7q5pn" Jan 30 08:23:27 crc kubenswrapper[4870]: I0130 08:23:27.956606 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vk67k"] Jan 30 08:23:27 crc kubenswrapper[4870]: I0130 08:23:27.957915 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vk67k" Jan 30 08:23:27 crc kubenswrapper[4870]: I0130 08:23:27.972987 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 30 08:23:27 crc kubenswrapper[4870]: I0130 08:23:27.973354 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-9lk7j" Jan 30 08:23:27 crc kubenswrapper[4870]: I0130 08:23:27.973781 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 30 08:23:27 crc kubenswrapper[4870]: I0130 08:23:27.990567 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vk67k"] Jan 30 08:23:28 crc kubenswrapper[4870]: I0130 08:23:28.081379 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stjxk\" (UniqueName: \"kubernetes.io/projected/7d303d04-4cbe-4ca5-b134-f2f8312c227d-kube-api-access-stjxk\") pod \"openstack-operator-index-vk67k\" (UID: \"7d303d04-4cbe-4ca5-b134-f2f8312c227d\") " pod="openstack-operators/openstack-operator-index-vk67k" Jan 30 08:23:28 crc kubenswrapper[4870]: I0130 08:23:28.183567 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stjxk\" (UniqueName: \"kubernetes.io/projected/7d303d04-4cbe-4ca5-b134-f2f8312c227d-kube-api-access-stjxk\") pod \"openstack-operator-index-vk67k\" (UID: \"7d303d04-4cbe-4ca5-b134-f2f8312c227d\") " pod="openstack-operators/openstack-operator-index-vk67k" Jan 30 08:23:28 crc kubenswrapper[4870]: I0130 08:23:28.212999 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stjxk\" (UniqueName: \"kubernetes.io/projected/7d303d04-4cbe-4ca5-b134-f2f8312c227d-kube-api-access-stjxk\") pod \"openstack-operator-index-vk67k\" (UID: \"7d303d04-4cbe-4ca5-b134-f2f8312c227d\") " pod="openstack-operators/openstack-operator-index-vk67k" Jan 30 08:23:28 crc kubenswrapper[4870]: I0130 08:23:28.275806 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vk67k" Jan 30 08:23:28 crc kubenswrapper[4870]: I0130 08:23:28.527280 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vk67k"] Jan 30 08:23:29 crc kubenswrapper[4870]: I0130 08:23:29.510151 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vk67k" event={"ID":"7d303d04-4cbe-4ca5-b134-f2f8312c227d","Type":"ContainerStarted","Data":"4dd9c22d1339c45953edde7fd3b89d1a6b9500ddda6ad95e56185d56d08f2af7"} Jan 30 08:23:31 crc kubenswrapper[4870]: I0130 08:23:31.114688 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vk67k"] Jan 30 08:23:31 crc kubenswrapper[4870]: I0130 08:23:31.723842 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-4bccf"] Jan 30 08:23:31 crc kubenswrapper[4870]: I0130 08:23:31.727440 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-4bccf" Jan 30 08:23:31 crc kubenswrapper[4870]: I0130 08:23:31.750598 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4bccf"] Jan 30 08:23:31 crc kubenswrapper[4870]: I0130 08:23:31.839394 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txn25\" (UniqueName: \"kubernetes.io/projected/c79c7300-5362-40dc-a952-2193e7a6908b-kube-api-access-txn25\") pod \"openstack-operator-index-4bccf\" (UID: \"c79c7300-5362-40dc-a952-2193e7a6908b\") " pod="openstack-operators/openstack-operator-index-4bccf" Jan 30 08:23:31 crc kubenswrapper[4870]: I0130 08:23:31.940834 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txn25\" (UniqueName: \"kubernetes.io/projected/c79c7300-5362-40dc-a952-2193e7a6908b-kube-api-access-txn25\") pod \"openstack-operator-index-4bccf\" (UID: \"c79c7300-5362-40dc-a952-2193e7a6908b\") " pod="openstack-operators/openstack-operator-index-4bccf" Jan 30 08:23:31 crc kubenswrapper[4870]: I0130 08:23:31.979606 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txn25\" (UniqueName: \"kubernetes.io/projected/c79c7300-5362-40dc-a952-2193e7a6908b-kube-api-access-txn25\") pod \"openstack-operator-index-4bccf\" (UID: \"c79c7300-5362-40dc-a952-2193e7a6908b\") " pod="openstack-operators/openstack-operator-index-4bccf" Jan 30 08:23:32 crc kubenswrapper[4870]: I0130 08:23:32.055667 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4bccf" Jan 30 08:23:32 crc kubenswrapper[4870]: I0130 08:23:32.325591 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4bccf"] Jan 30 08:23:32 crc kubenswrapper[4870]: I0130 08:23:32.547944 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vk67k" event={"ID":"7d303d04-4cbe-4ca5-b134-f2f8312c227d","Type":"ContainerStarted","Data":"16a6a78d6c093d90f00b838a5288df35c2f1a86506f33234e1ab0bc9a5df8205"} Jan 30 08:23:32 crc kubenswrapper[4870]: I0130 08:23:32.548189 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-vk67k" podUID="7d303d04-4cbe-4ca5-b134-f2f8312c227d" containerName="registry-server" containerID="cri-o://16a6a78d6c093d90f00b838a5288df35c2f1a86506f33234e1ab0bc9a5df8205" gracePeriod=2 Jan 30 08:23:32 crc kubenswrapper[4870]: I0130 08:23:32.553070 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4bccf" event={"ID":"c79c7300-5362-40dc-a952-2193e7a6908b","Type":"ContainerStarted","Data":"2f9a22a68968c2a49c744d38054cc9daa6f102fad302a50844154ba07d842499"} Jan 30 08:23:32 crc kubenswrapper[4870]: I0130 08:23:32.572494 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vk67k" podStartSLOduration=2.333599409 podStartE2EDuration="5.572456501s" podCreationTimestamp="2026-01-30 08:23:27 +0000 UTC" firstStartedPulling="2026-01-30 08:23:28.538248762 +0000 UTC m=+847.233795871" lastFinishedPulling="2026-01-30 08:23:31.777105854 +0000 UTC m=+850.472652963" observedRunningTime="2026-01-30 08:23:32.569839439 +0000 UTC m=+851.265386558" watchObservedRunningTime="2026-01-30 08:23:32.572456501 +0000 UTC m=+851.268003980" 
Jan 30 08:23:33 crc kubenswrapper[4870]: I0130 08:23:33.056377 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vk67k" Jan 30 08:23:33 crc kubenswrapper[4870]: I0130 08:23:33.160202 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stjxk\" (UniqueName: \"kubernetes.io/projected/7d303d04-4cbe-4ca5-b134-f2f8312c227d-kube-api-access-stjxk\") pod \"7d303d04-4cbe-4ca5-b134-f2f8312c227d\" (UID: \"7d303d04-4cbe-4ca5-b134-f2f8312c227d\") " Jan 30 08:23:33 crc kubenswrapper[4870]: I0130 08:23:33.170843 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d303d04-4cbe-4ca5-b134-f2f8312c227d-kube-api-access-stjxk" (OuterVolumeSpecName: "kube-api-access-stjxk") pod "7d303d04-4cbe-4ca5-b134-f2f8312c227d" (UID: "7d303d04-4cbe-4ca5-b134-f2f8312c227d"). InnerVolumeSpecName "kube-api-access-stjxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:23:33 crc kubenswrapper[4870]: I0130 08:23:33.262588 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stjxk\" (UniqueName: \"kubernetes.io/projected/7d303d04-4cbe-4ca5-b134-f2f8312c227d-kube-api-access-stjxk\") on node \"crc\" DevicePath \"\"" Jan 30 08:23:33 crc kubenswrapper[4870]: I0130 08:23:33.564474 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4bccf" event={"ID":"c79c7300-5362-40dc-a952-2193e7a6908b","Type":"ContainerStarted","Data":"439e79c69ead461c3d52bd6167951926e409dcda33f4f96731f074ca8e53b78d"} Jan 30 08:23:33 crc kubenswrapper[4870]: I0130 08:23:33.567450 4870 generic.go:334] "Generic (PLEG): container finished" podID="7d303d04-4cbe-4ca5-b134-f2f8312c227d" containerID="16a6a78d6c093d90f00b838a5288df35c2f1a86506f33234e1ab0bc9a5df8205" exitCode=0 Jan 30 08:23:33 crc kubenswrapper[4870]: I0130 08:23:33.567525 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vk67k" event={"ID":"7d303d04-4cbe-4ca5-b134-f2f8312c227d","Type":"ContainerDied","Data":"16a6a78d6c093d90f00b838a5288df35c2f1a86506f33234e1ab0bc9a5df8205"} Jan 30 08:23:33 crc kubenswrapper[4870]: I0130 08:23:33.567589 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vk67k" event={"ID":"7d303d04-4cbe-4ca5-b134-f2f8312c227d","Type":"ContainerDied","Data":"4dd9c22d1339c45953edde7fd3b89d1a6b9500ddda6ad95e56185d56d08f2af7"} Jan 30 08:23:33 crc kubenswrapper[4870]: I0130 08:23:33.567620 4870 scope.go:117] "RemoveContainer" containerID="16a6a78d6c093d90f00b838a5288df35c2f1a86506f33234e1ab0bc9a5df8205" Jan 30 08:23:33 crc kubenswrapper[4870]: I0130 08:23:33.567861 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vk67k" Jan 30 08:23:33 crc kubenswrapper[4870]: I0130 08:23:33.600368 4870 scope.go:117] "RemoveContainer" containerID="16a6a78d6c093d90f00b838a5288df35c2f1a86506f33234e1ab0bc9a5df8205" Jan 30 08:23:33 crc kubenswrapper[4870]: E0130 08:23:33.600859 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16a6a78d6c093d90f00b838a5288df35c2f1a86506f33234e1ab0bc9a5df8205\": container with ID starting with 16a6a78d6c093d90f00b838a5288df35c2f1a86506f33234e1ab0bc9a5df8205 not found: ID does not exist" containerID="16a6a78d6c093d90f00b838a5288df35c2f1a86506f33234e1ab0bc9a5df8205" Jan 30 08:23:33 crc kubenswrapper[4870]: I0130 08:23:33.600948 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16a6a78d6c093d90f00b838a5288df35c2f1a86506f33234e1ab0bc9a5df8205"} err="failed to get container status \"16a6a78d6c093d90f00b838a5288df35c2f1a86506f33234e1ab0bc9a5df8205\": rpc error: code = NotFound desc = could not find container \"16a6a78d6c093d90f00b838a5288df35c2f1a86506f33234e1ab0bc9a5df8205\": container with ID starting with 16a6a78d6c093d90f00b838a5288df35c2f1a86506f33234e1ab0bc9a5df8205 not found: ID does not exist" Jan 30 08:23:33 crc kubenswrapper[4870]: I0130 08:23:33.602050 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-4bccf" podStartSLOduration=2.5446929750000002 podStartE2EDuration="2.602030206s" podCreationTimestamp="2026-01-30 08:23:31 +0000 UTC" firstStartedPulling="2026-01-30 08:23:32.340615998 +0000 UTC m=+851.036163107" lastFinishedPulling="2026-01-30 08:23:32.397953229 +0000 UTC m=+851.093500338" observedRunningTime="2026-01-30 08:23:33.595347306 +0000 UTC m=+852.290894455" watchObservedRunningTime="2026-01-30 08:23:33.602030206 +0000 UTC m=+852.297577355" Jan 30 08:23:33 crc kubenswrapper[4870]: I0130 08:23:33.641363 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vk67k"] Jan 30 08:23:33 crc kubenswrapper[4870]: I0130 08:23:33.650006 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-vk67k"] Jan 30 08:23:33 crc kubenswrapper[4870]: I0130 08:23:33.938684 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-zwhkv" Jan 30 08:23:34 crc kubenswrapper[4870]: I0130 08:23:34.093989 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d303d04-4cbe-4ca5-b134-f2f8312c227d" path="/var/lib/kubelet/pods/7d303d04-4cbe-4ca5-b134-f2f8312c227d/volumes" Jan 30 08:23:42 crc kubenswrapper[4870]: I0130 08:23:42.056252 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-4bccf" Jan 30 08:23:42 crc kubenswrapper[4870]: I0130 08:23:42.056838 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-4bccf" Jan 30 08:23:42 crc kubenswrapper[4870]: I0130 08:23:42.124859 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-4bccf" Jan 30 08:23:42 crc kubenswrapper[4870]: I0130 08:23:42.695461 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-4bccf" Jan 30 08:23:43 crc kubenswrapper[4870]: I0130 
08:23:43.172139 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw"] Jan 30 08:23:43 crc kubenswrapper[4870]: E0130 08:23:43.172420 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d303d04-4cbe-4ca5-b134-f2f8312c227d" containerName="registry-server" Jan 30 08:23:43 crc kubenswrapper[4870]: I0130 08:23:43.172432 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d303d04-4cbe-4ca5-b134-f2f8312c227d" containerName="registry-server" Jan 30 08:23:43 crc kubenswrapper[4870]: I0130 08:23:43.172551 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d303d04-4cbe-4ca5-b134-f2f8312c227d" containerName="registry-server" Jan 30 08:23:43 crc kubenswrapper[4870]: I0130 08:23:43.173435 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw" Jan 30 08:23:43 crc kubenswrapper[4870]: I0130 08:23:43.177127 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-kcgg7" Jan 30 08:23:43 crc kubenswrapper[4870]: I0130 08:23:43.182431 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw"] Jan 30 08:23:43 crc kubenswrapper[4870]: I0130 08:23:43.338696 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f3c7406-d095-434b-a79a-f24373a9b141-util\") pod \"538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw\" (UID: \"6f3c7406-d095-434b-a79a-f24373a9b141\") " pod="openstack-operators/538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw" Jan 30 08:23:43 crc kubenswrapper[4870]: I0130 08:23:43.339687 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f3c7406-d095-434b-a79a-f24373a9b141-bundle\") pod \"538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw\" (UID: \"6f3c7406-d095-434b-a79a-f24373a9b141\") " pod="openstack-operators/538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw" Jan 30 08:23:43 crc kubenswrapper[4870]: I0130 08:23:43.339737 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8tbg\" (UniqueName: \"kubernetes.io/projected/6f3c7406-d095-434b-a79a-f24373a9b141-kube-api-access-b8tbg\") pod \"538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw\" (UID: \"6f3c7406-d095-434b-a79a-f24373a9b141\") " pod="openstack-operators/538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw" Jan 30 08:23:43 crc kubenswrapper[4870]: I0130 08:23:43.441033 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f3c7406-d095-434b-a79a-f24373a9b141-util\") pod \"538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw\" (UID: \"6f3c7406-d095-434b-a79a-f24373a9b141\") " pod="openstack-operators/538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw" Jan 30 08:23:43 crc kubenswrapper[4870]: I0130 08:23:43.441095 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f3c7406-d095-434b-a79a-f24373a9b141-bundle\") pod 
\"538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw\" (UID: \"6f3c7406-d095-434b-a79a-f24373a9b141\") " pod="openstack-operators/538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw" Jan 30 08:23:43 crc kubenswrapper[4870]: I0130 08:23:43.441128 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8tbg\" (UniqueName: \"kubernetes.io/projected/6f3c7406-d095-434b-a79a-f24373a9b141-kube-api-access-b8tbg\") pod \"538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw\" (UID: \"6f3c7406-d095-434b-a79a-f24373a9b141\") " pod="openstack-operators/538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw" Jan 30 08:23:43 crc kubenswrapper[4870]: I0130 08:23:43.441831 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f3c7406-d095-434b-a79a-f24373a9b141-util\") pod \"538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw\" (UID: \"6f3c7406-d095-434b-a79a-f24373a9b141\") " pod="openstack-operators/538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw" Jan 30 08:23:43 crc kubenswrapper[4870]: I0130 08:23:43.442039 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f3c7406-d095-434b-a79a-f24373a9b141-bundle\") pod \"538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw\" (UID: \"6f3c7406-d095-434b-a79a-f24373a9b141\") " pod="openstack-operators/538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw" Jan 30 08:23:43 crc kubenswrapper[4870]: I0130 08:23:43.466766 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8tbg\" (UniqueName: \"kubernetes.io/projected/6f3c7406-d095-434b-a79a-f24373a9b141-kube-api-access-b8tbg\") pod \"538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw\" (UID: \"6f3c7406-d095-434b-a79a-f24373a9b141\") " pod="openstack-operators/538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw" Jan 30 08:23:43 crc kubenswrapper[4870]: I0130 08:23:43.493730 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw" Jan 30 08:23:43 crc kubenswrapper[4870]: I0130 08:23:43.767187 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw"] Jan 30 08:23:44 crc kubenswrapper[4870]: E0130 08:23:44.094144 4870 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f3c7406_d095_434b_a79a_f24373a9b141.slice/crio-fbe751daab30c254bc9548e5b9ad99cb96f0f442bdf9a0bca64985b96e1e512f.scope\": RecentStats: unable to find data in memory cache]" Jan 30 08:23:44 crc kubenswrapper[4870]: I0130 08:23:44.678090 4870 generic.go:334] "Generic (PLEG): container finished" podID="6f3c7406-d095-434b-a79a-f24373a9b141" containerID="fbe751daab30c254bc9548e5b9ad99cb96f0f442bdf9a0bca64985b96e1e512f" exitCode=0 Jan 30 08:23:44 crc kubenswrapper[4870]: I0130 08:23:44.678207 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw" event={"ID":"6f3c7406-d095-434b-a79a-f24373a9b141","Type":"ContainerDied","Data":"fbe751daab30c254bc9548e5b9ad99cb96f0f442bdf9a0bca64985b96e1e512f"} Jan 30 08:23:44 crc kubenswrapper[4870]: I0130 08:23:44.678561 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw" event={"ID":"6f3c7406-d095-434b-a79a-f24373a9b141","Type":"ContainerStarted","Data":"8d58849cc2eca4dbbd8aedc621bd2b4682a82f44b1d861543780b17030ac22a6"} Jan 30 08:23:45 crc kubenswrapper[4870]: I0130 08:23:45.691241 4870 generic.go:334] "Generic (PLEG): container finished" podID="6f3c7406-d095-434b-a79a-f24373a9b141" containerID="83f08d103cec33c1c080b7171ebb52042419b4ce73f6c86cb79eb07b86f85255" exitCode=0 Jan 30 08:23:45 crc kubenswrapper[4870]: I0130 08:23:45.691330 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw" event={"ID":"6f3c7406-d095-434b-a79a-f24373a9b141","Type":"ContainerDied","Data":"83f08d103cec33c1c080b7171ebb52042419b4ce73f6c86cb79eb07b86f85255"} Jan 30 08:23:46 crc kubenswrapper[4870]: I0130 08:23:46.702454 4870 generic.go:334] "Generic (PLEG): container finished" podID="6f3c7406-d095-434b-a79a-f24373a9b141" containerID="73a62a4577b0873fb9119073cd13c8c85100124b2cc27968fc48f710ab2d3107" exitCode=0 Jan 30 08:23:46 crc kubenswrapper[4870]: I0130 08:23:46.702540 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw" event={"ID":"6f3c7406-d095-434b-a79a-f24373a9b141","Type":"ContainerDied","Data":"73a62a4577b0873fb9119073cd13c8c85100124b2cc27968fc48f710ab2d3107"} Jan 30 08:23:48 crc kubenswrapper[4870]: I0130 08:23:48.157366 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw" Jan 30 08:23:48 crc kubenswrapper[4870]: I0130 08:23:48.218159 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f3c7406-d095-434b-a79a-f24373a9b141-util\") pod \"6f3c7406-d095-434b-a79a-f24373a9b141\" (UID: \"6f3c7406-d095-434b-a79a-f24373a9b141\") " Jan 30 08:23:48 crc kubenswrapper[4870]: I0130 08:23:48.219104 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8tbg\" (UniqueName: \"kubernetes.io/projected/6f3c7406-d095-434b-a79a-f24373a9b141-kube-api-access-b8tbg\") pod \"6f3c7406-d095-434b-a79a-f24373a9b141\" (UID: \"6f3c7406-d095-434b-a79a-f24373a9b141\") " Jan 30 08:23:48 crc kubenswrapper[4870]: I0130 08:23:48.219290 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f3c7406-d095-434b-a79a-f24373a9b141-bundle\") pod \"6f3c7406-d095-434b-a79a-f24373a9b141\" (UID: \"6f3c7406-d095-434b-a79a-f24373a9b141\") " Jan 30 08:23:48 crc kubenswrapper[4870]: I0130 08:23:48.220228 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f3c7406-d095-434b-a79a-f24373a9b141-bundle" (OuterVolumeSpecName: "bundle") pod "6f3c7406-d095-434b-a79a-f24373a9b141" (UID: "6f3c7406-d095-434b-a79a-f24373a9b141"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:23:48 crc kubenswrapper[4870]: I0130 08:23:48.220741 4870 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f3c7406-d095-434b-a79a-f24373a9b141-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:23:48 crc kubenswrapper[4870]: I0130 08:23:48.228277 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f3c7406-d095-434b-a79a-f24373a9b141-kube-api-access-b8tbg" (OuterVolumeSpecName: "kube-api-access-b8tbg") pod "6f3c7406-d095-434b-a79a-f24373a9b141" (UID: "6f3c7406-d095-434b-a79a-f24373a9b141"). InnerVolumeSpecName "kube-api-access-b8tbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:23:48 crc kubenswrapper[4870]: I0130 08:23:48.242883 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f3c7406-d095-434b-a79a-f24373a9b141-util" (OuterVolumeSpecName: "util") pod "6f3c7406-d095-434b-a79a-f24373a9b141" (UID: "6f3c7406-d095-434b-a79a-f24373a9b141"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:23:48 crc kubenswrapper[4870]: I0130 08:23:48.322719 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8tbg\" (UniqueName: \"kubernetes.io/projected/6f3c7406-d095-434b-a79a-f24373a9b141-kube-api-access-b8tbg\") on node \"crc\" DevicePath \"\"" Jan 30 08:23:48 crc kubenswrapper[4870]: I0130 08:23:48.322774 4870 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f3c7406-d095-434b-a79a-f24373a9b141-util\") on node \"crc\" DevicePath \"\"" Jan 30 08:23:48 crc kubenswrapper[4870]: I0130 08:23:48.730199 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw" event={"ID":"6f3c7406-d095-434b-a79a-f24373a9b141","Type":"ContainerDied","Data":"8d58849cc2eca4dbbd8aedc621bd2b4682a82f44b1d861543780b17030ac22a6"} Jan 30 08:23:48 crc kubenswrapper[4870]: I0130 08:23:48.730260 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d58849cc2eca4dbbd8aedc621bd2b4682a82f44b1d861543780b17030ac22a6" Jan 30 08:23:48 crc kubenswrapper[4870]: I0130 08:23:48.730376 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw" Jan 30 08:23:55 crc kubenswrapper[4870]: I0130 08:23:55.724764 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-594f7f44c-vnpnd"] Jan 30 08:23:55 crc kubenswrapper[4870]: E0130 08:23:55.725828 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f3c7406-d095-434b-a79a-f24373a9b141" containerName="pull" Jan 30 08:23:55 crc kubenswrapper[4870]: I0130 08:23:55.725845 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f3c7406-d095-434b-a79a-f24373a9b141" containerName="pull" Jan 30 08:23:55 crc kubenswrapper[4870]: E0130 08:23:55.725881 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f3c7406-d095-434b-a79a-f24373a9b141" containerName="util" Jan 30 08:23:55 crc kubenswrapper[4870]: I0130 08:23:55.725909 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f3c7406-d095-434b-a79a-f24373a9b141" containerName="util" Jan 30 08:23:55 crc kubenswrapper[4870]: E0130 08:23:55.725923 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f3c7406-d095-434b-a79a-f24373a9b141" containerName="extract" Jan 30 08:23:55 crc kubenswrapper[4870]: I0130 08:23:55.725933 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f3c7406-d095-434b-a79a-f24373a9b141" containerName="extract" Jan 30 08:23:55 crc kubenswrapper[4870]: I0130 08:23:55.726078 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f3c7406-d095-434b-a79a-f24373a9b141" containerName="extract" Jan 30 08:23:55 crc kubenswrapper[4870]: I0130 08:23:55.726679 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-594f7f44c-vnpnd" Jan 30 08:23:55 crc kubenswrapper[4870]: I0130 08:23:55.729350 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-htm2c" Jan 30 08:23:55 crc kubenswrapper[4870]: I0130 08:23:55.753794 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-594f7f44c-vnpnd"] Jan 30 08:23:55 crc kubenswrapper[4870]: I0130 08:23:55.841471 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9j4z\" (UniqueName: \"kubernetes.io/projected/b5c8b38a-bdec-4120-9802-5a35815eca01-kube-api-access-w9j4z\") pod \"openstack-operator-controller-init-594f7f44c-vnpnd\" (UID: \"b5c8b38a-bdec-4120-9802-5a35815eca01\") " pod="openstack-operators/openstack-operator-controller-init-594f7f44c-vnpnd" Jan 30 08:23:55 crc kubenswrapper[4870]: I0130 08:23:55.942792 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9j4z\" (UniqueName: \"kubernetes.io/projected/b5c8b38a-bdec-4120-9802-5a35815eca01-kube-api-access-w9j4z\") pod \"openstack-operator-controller-init-594f7f44c-vnpnd\" (UID: \"b5c8b38a-bdec-4120-9802-5a35815eca01\") " pod="openstack-operators/openstack-operator-controller-init-594f7f44c-vnpnd" Jan 30 08:23:55 crc kubenswrapper[4870]: I0130 08:23:55.961424 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9j4z\" (UniqueName: \"kubernetes.io/projected/b5c8b38a-bdec-4120-9802-5a35815eca01-kube-api-access-w9j4z\") pod \"openstack-operator-controller-init-594f7f44c-vnpnd\" (UID: \"b5c8b38a-bdec-4120-9802-5a35815eca01\") " pod="openstack-operators/openstack-operator-controller-init-594f7f44c-vnpnd" Jan 30 08:23:56 crc kubenswrapper[4870]: I0130 08:23:56.051286 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-594f7f44c-vnpnd" Jan 30 08:23:56 crc kubenswrapper[4870]: I0130 08:23:56.365607 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-594f7f44c-vnpnd"] Jan 30 08:23:56 crc kubenswrapper[4870]: I0130 08:23:56.793074 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-594f7f44c-vnpnd" event={"ID":"b5c8b38a-bdec-4120-9802-5a35815eca01","Type":"ContainerStarted","Data":"3bbc9db6a3a9845b1560190ce5b6bcc662aa58e84baca5090946339b19e29fa8"} Jan 30 08:24:01 crc kubenswrapper[4870]: I0130 08:24:01.834131 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-594f7f44c-vnpnd" event={"ID":"b5c8b38a-bdec-4120-9802-5a35815eca01","Type":"ContainerStarted","Data":"0f0b4cb65752c371320e8731349c6678583cf1a80bf7ad6418a69328cac6897c"} Jan 30 08:24:01 crc kubenswrapper[4870]: I0130 08:24:01.835082 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-594f7f44c-vnpnd" Jan 30 08:24:01 crc kubenswrapper[4870]: I0130 08:24:01.891968 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-594f7f44c-vnpnd" podStartSLOduration=2.3779210109999998 podStartE2EDuration="6.891871322s" podCreationTimestamp="2026-01-30 08:23:55 +0000 UTC" firstStartedPulling="2026-01-30 08:23:56.371324108 +0000 UTC m=+875.066871237" lastFinishedPulling="2026-01-30 08:24:00.885274399 +0000 UTC m=+879.580821548" observedRunningTime="2026-01-30 08:24:01.88003965 +0000 UTC m=+880.575586799" watchObservedRunningTime="2026-01-30 08:24:01.891871322 +0000 UTC m=+880.587418461" Jan 30 08:24:06 crc kubenswrapper[4870]: I0130 08:24:06.057845 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-594f7f44c-vnpnd" Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.765425 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wfpg9"] Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.767117 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wfpg9" Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.768929 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-g8h7z" Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.772589 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-hsfpq"] Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.773540 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-hsfpq" Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.776790 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-7764j" Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.783999 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wfpg9"] Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.806844 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-hsfpq"] Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.812120 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-grbz8"] Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.813279 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-grbz8" Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.815170 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-5th8n" Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.818812 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-tkrpg"] Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.819887 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tkrpg" Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.821797 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-gzpfd" Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.828931 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cqw9\" (UniqueName: \"kubernetes.io/projected/54c01287-d66d-46bc-bbb8-7532263099c5-kube-api-access-5cqw9\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-wfpg9\" (UID: \"54c01287-d66d-46bc-bbb8-7532263099c5\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wfpg9" Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.828979 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgvmk\" (UniqueName: \"kubernetes.io/projected/96be73fb-f1fc-4c5c-a643-7b9dcc832ac6-kube-api-access-vgvmk\") pod \"glance-operator-controller-manager-8886f4c47-tkrpg\" (UID: \"96be73fb-f1fc-4c5c-a643-7b9dcc832ac6\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tkrpg" Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.829323 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mh2k\" (UniqueName: \"kubernetes.io/projected/dfd97388-6e7f-4f4a-9e71-7a32a28b1dcb-kube-api-access-4mh2k\") pod \"designate-operator-controller-manager-6d9697b7f4-grbz8\" (UID: \"dfd97388-6e7f-4f4a-9e71-7a32a28b1dcb\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-grbz8" Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.829422 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wmrpc\" (UniqueName: \"kubernetes.io/projected/e973c5f3-3291-4d4b-85ce-806ef6f83c1a-kube-api-access-wmrpc\") pod \"cinder-operator-controller-manager-8d874c8fc-hsfpq\" (UID: \"e973c5f3-3291-4d4b-85ce-806ef6f83c1a\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-hsfpq" Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.847281 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-grbz8"] Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.853408 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-tkrpg"] Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.893025 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-wlkxq"] Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.894634 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-wlkxq" Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.898552 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-jtj66" Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.910006 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-hbmf7"] Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.911517 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hbmf7" Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.916121 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-pnprh" Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.928980 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-wlkxq"] Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.931348 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmrpc\" (UniqueName: \"kubernetes.io/projected/e973c5f3-3291-4d4b-85ce-806ef6f83c1a-kube-api-access-wmrpc\") pod \"cinder-operator-controller-manager-8d874c8fc-hsfpq\" (UID: \"e973c5f3-3291-4d4b-85ce-806ef6f83c1a\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-hsfpq" Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.931441 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cqw9\" (UniqueName: \"kubernetes.io/projected/54c01287-d66d-46bc-bbb8-7532263099c5-kube-api-access-5cqw9\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-wfpg9\" (UID: \"54c01287-d66d-46bc-bbb8-7532263099c5\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wfpg9" Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.931477 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgvmk\" (UniqueName: \"kubernetes.io/projected/96be73fb-f1fc-4c5c-a643-7b9dcc832ac6-kube-api-access-vgvmk\") pod \"glance-operator-controller-manager-8886f4c47-tkrpg\" (UID: \"96be73fb-f1fc-4c5c-a643-7b9dcc832ac6\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tkrpg" Jan 30 08:24:26 crc kubenswrapper[4870]: 
Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.931603 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mh2k\" (UniqueName: \"kubernetes.io/projected/dfd97388-6e7f-4f4a-9e71-7a32a28b1dcb-kube-api-access-4mh2k\") pod \"designate-operator-controller-manager-6d9697b7f4-grbz8\" (UID: \"dfd97388-6e7f-4f4a-9e71-7a32a28b1dcb\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-grbz8"
Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.931633 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzglw\" (UniqueName: \"kubernetes.io/projected/b9449ead-e087-4895-a88a-8bdfe0835ebd-kube-api-access-rzglw\") pod \"heat-operator-controller-manager-69d6db494d-wlkxq\" (UID: \"b9449ead-e087-4895-a88a-8bdfe0835ebd\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-wlkxq"
Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.933836 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-hbmf7"]
Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.948940 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-spzcf"]
Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.950079 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-spzcf"
Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.952341 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.952678 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-pppwc"
Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.961487 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-5vfrj"]
Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.962536 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-spzcf"]
Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.962647 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-5vfrj"
Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.969314 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-znfz5"
Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.971687 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-rhfst"]
Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.972803 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mh2k\" (UniqueName: \"kubernetes.io/projected/dfd97388-6e7f-4f4a-9e71-7a32a28b1dcb-kube-api-access-4mh2k\") pod \"designate-operator-controller-manager-6d9697b7f4-grbz8\" (UID: \"dfd97388-6e7f-4f4a-9e71-7a32a28b1dcb\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-grbz8"
Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.973001 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rhfst"
Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.975031 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cqw9\" (UniqueName: \"kubernetes.io/projected/54c01287-d66d-46bc-bbb8-7532263099c5-kube-api-access-5cqw9\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-wfpg9\" (UID: \"54c01287-d66d-46bc-bbb8-7532263099c5\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wfpg9"
Jan 30 08:24:26 crc kubenswrapper[4870]: I0130 08:24:26.991941 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-mnlnz"
Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.004678 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-5vfrj"]
Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.006858 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgvmk\" (UniqueName: \"kubernetes.io/projected/96be73fb-f1fc-4c5c-a643-7b9dcc832ac6-kube-api-access-vgvmk\") pod \"glance-operator-controller-manager-8886f4c47-tkrpg\" (UID: \"96be73fb-f1fc-4c5c-a643-7b9dcc832ac6\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tkrpg"
Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.009300 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-59rt2"]
Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.010180 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-59rt2"
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-59rt2" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.011562 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmrpc\" (UniqueName: \"kubernetes.io/projected/e973c5f3-3291-4d4b-85ce-806ef6f83c1a-kube-api-access-wmrpc\") pod \"cinder-operator-controller-manager-8d874c8fc-hsfpq\" (UID: \"e973c5f3-3291-4d4b-85ce-806ef6f83c1a\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-hsfpq" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.017767 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-pbfgq" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.033337 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgrl7\" (UniqueName: \"kubernetes.io/projected/925313c0-6800-4a27-814b-887b46cf49ad-kube-api-access-tgrl7\") pod \"horizon-operator-controller-manager-5fb775575f-hbmf7\" (UID: \"925313c0-6800-4a27-814b-887b46cf49ad\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hbmf7" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.033449 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzglw\" (UniqueName: \"kubernetes.io/projected/b9449ead-e087-4895-a88a-8bdfe0835ebd-kube-api-access-rzglw\") pod \"heat-operator-controller-manager-69d6db494d-wlkxq\" (UID: \"b9449ead-e087-4895-a88a-8bdfe0835ebd\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-wlkxq" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.053982 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-rhfst"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.062035 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzglw\" (UniqueName: \"kubernetes.io/projected/b9449ead-e087-4895-a88a-8bdfe0835ebd-kube-api-access-rzglw\") pod \"heat-operator-controller-manager-69d6db494d-wlkxq\" (UID: \"b9449ead-e087-4895-a88a-8bdfe0835ebd\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-wlkxq" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.066576 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-59rt2"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.078133 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgrl7\" (UniqueName: \"kubernetes.io/projected/925313c0-6800-4a27-814b-887b46cf49ad-kube-api-access-tgrl7\") pod \"horizon-operator-controller-manager-5fb775575f-hbmf7\" (UID: \"925313c0-6800-4a27-814b-887b46cf49ad\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hbmf7" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.083499 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-j9bdn"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.084509 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-j9bdn" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.088070 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-6mztf" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.096133 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wfpg9" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.109357 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-j9bdn"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.109405 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-hsfpq" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.115492 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-2xdfh"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.116292 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2xdfh" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.124333 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-7cjsm" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.138284 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7shhs\" (UniqueName: \"kubernetes.io/projected/db7aeba5-92f5-4887-9a6a-92d8c57650d2-kube-api-access-7shhs\") pod \"keystone-operator-controller-manager-84f48565d4-rhfst\" (UID: \"db7aeba5-92f5-4887-9a6a-92d8c57650d2\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rhfst" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.138391 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b429\" (UniqueName: \"kubernetes.io/projected/ea3efedd-cb74-48c7-b246-b188bac37ed4-kube-api-access-5b429\") pod \"mariadb-operator-controller-manager-67bf948998-59rt2\" (UID: \"ea3efedd-cb74-48c7-b246-b188bac37ed4\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-59rt2" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.138441 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lt4t\" (UniqueName: \"kubernetes.io/projected/46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05-kube-api-access-5lt4t\") pod \"infra-operator-controller-manager-79955696d6-spzcf\" (UID: \"46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-spzcf" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.138460 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw8fb\" (UniqueName: \"kubernetes.io/projected/5680ceb3-f5ec-4d9e-a313-13564402bff2-kube-api-access-hw8fb\") pod \"ironic-operator-controller-manager-5f4b8bd54d-5vfrj\" (UID: \"5680ceb3-f5ec-4d9e-a313-13564402bff2\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-5vfrj" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.138496 4870 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05-cert\") pod \"infra-operator-controller-manager-79955696d6-spzcf\" (UID: \"46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-spzcf" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.152503 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-cpn6f"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.157369 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-grbz8" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.159639 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cpn6f" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.169087 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-55q7v" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.169535 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tkrpg" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.218823 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-2xdfh"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.238750 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-wlkxq" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.264596 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lt4t\" (UniqueName: \"kubernetes.io/projected/46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05-kube-api-access-5lt4t\") pod \"infra-operator-controller-manager-79955696d6-spzcf\" (UID: \"46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-spzcf" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.264806 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw8fb\" (UniqueName: \"kubernetes.io/projected/5680ceb3-f5ec-4d9e-a313-13564402bff2-kube-api-access-hw8fb\") pod \"ironic-operator-controller-manager-5f4b8bd54d-5vfrj\" (UID: \"5680ceb3-f5ec-4d9e-a313-13564402bff2\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-5vfrj" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.265098 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05-cert\") pod \"infra-operator-controller-manager-79955696d6-spzcf\" (UID: \"46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-spzcf" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.265199 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7shhs\" (UniqueName: \"kubernetes.io/projected/db7aeba5-92f5-4887-9a6a-92d8c57650d2-kube-api-access-7shhs\") pod \"keystone-operator-controller-manager-84f48565d4-rhfst\" (UID: \"db7aeba5-92f5-4887-9a6a-92d8c57650d2\") " 
pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rhfst" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.272427 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlbfg\" (UniqueName: \"kubernetes.io/projected/0ea209e2-96bf-4919-ad8f-f86de2b78ab1-kube-api-access-jlbfg\") pod \"neutron-operator-controller-manager-585dbc889-2xdfh\" (UID: \"0ea209e2-96bf-4919-ad8f-f86de2b78ab1\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2xdfh" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.274703 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h68fs\" (UniqueName: \"kubernetes.io/projected/5cde6cc5-f427-4349-8c8a-3dce0deac5a9-kube-api-access-h68fs\") pod \"manila-operator-controller-manager-7dd968899f-j9bdn\" (UID: \"5cde6cc5-f427-4349-8c8a-3dce0deac5a9\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-j9bdn" Jan 30 08:24:27 crc kubenswrapper[4870]: E0130 08:24:27.278280 4870 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 08:24:27 crc kubenswrapper[4870]: E0130 08:24:27.278869 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05-cert podName:46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05 nodeName:}" failed. No retries permitted until 2026-01-30 08:24:27.778721918 +0000 UTC m=+906.474269027 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05-cert") pod "infra-operator-controller-manager-79955696d6-spzcf" (UID: "46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05") : secret "infra-operator-webhook-server-cert" not found Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.282924 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b429\" (UniqueName: \"kubernetes.io/projected/ea3efedd-cb74-48c7-b246-b188bac37ed4-kube-api-access-5b429\") pod \"mariadb-operator-controller-manager-67bf948998-59rt2\" (UID: \"ea3efedd-cb74-48c7-b246-b188bac37ed4\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-59rt2" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.307205 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hbmf7" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.313420 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw8fb\" (UniqueName: \"kubernetes.io/projected/5680ceb3-f5ec-4d9e-a313-13564402bff2-kube-api-access-hw8fb\") pod \"ironic-operator-controller-manager-5f4b8bd54d-5vfrj\" (UID: \"5680ceb3-f5ec-4d9e-a313-13564402bff2\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-5vfrj" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.332989 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7shhs\" (UniqueName: \"kubernetes.io/projected/db7aeba5-92f5-4887-9a6a-92d8c57650d2-kube-api-access-7shhs\") pod \"keystone-operator-controller-manager-84f48565d4-rhfst\" (UID: \"db7aeba5-92f5-4887-9a6a-92d8c57650d2\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rhfst" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.335525 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b429\" (UniqueName: \"kubernetes.io/projected/ea3efedd-cb74-48c7-b246-b188bac37ed4-kube-api-access-5b429\") pod \"mariadb-operator-controller-manager-67bf948998-59rt2\" (UID: \"ea3efedd-cb74-48c7-b246-b188bac37ed4\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-59rt2" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.414285 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lt4t\" (UniqueName: \"kubernetes.io/projected/46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05-kube-api-access-5lt4t\") pod \"infra-operator-controller-manager-79955696d6-spzcf\" (UID: \"46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-spzcf" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.420387 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-4sftq"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.421390 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-4sftq" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.427673 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-zlfh7" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.432622 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlbfg\" (UniqueName: \"kubernetes.io/projected/0ea209e2-96bf-4919-ad8f-f86de2b78ab1-kube-api-access-jlbfg\") pod \"neutron-operator-controller-manager-585dbc889-2xdfh\" (UID: \"0ea209e2-96bf-4919-ad8f-f86de2b78ab1\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2xdfh" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.432704 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h68fs\" (UniqueName: \"kubernetes.io/projected/5cde6cc5-f427-4349-8c8a-3dce0deac5a9-kube-api-access-h68fs\") pod \"manila-operator-controller-manager-7dd968899f-j9bdn\" (UID: \"5cde6cc5-f427-4349-8c8a-3dce0deac5a9\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-j9bdn" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.432745 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v65mv\" (UniqueName: \"kubernetes.io/projected/604ff246-0f47-4c2c-8940-d76f10dce14e-kube-api-access-v65mv\") pod \"nova-operator-controller-manager-55bff696bd-cpn6f\" (UID: \"604ff246-0f47-4c2c-8940-d76f10dce14e\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cpn6f" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.433052 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-5vfrj" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.479683 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rhfst" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.487609 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-cpn6f"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.494832 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h68fs\" (UniqueName: \"kubernetes.io/projected/5cde6cc5-f427-4349-8c8a-3dce0deac5a9-kube-api-access-h68fs\") pod \"manila-operator-controller-manager-7dd968899f-j9bdn\" (UID: \"5cde6cc5-f427-4349-8c8a-3dce0deac5a9\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-j9bdn" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.513513 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-4sftq"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.530611 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.531752 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.533690 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v65mv\" (UniqueName: \"kubernetes.io/projected/604ff246-0f47-4c2c-8940-d76f10dce14e-kube-api-access-v65mv\") pod \"nova-operator-controller-manager-55bff696bd-cpn6f\" (UID: \"604ff246-0f47-4c2c-8940-d76f10dce14e\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cpn6f" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.533764 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlnhn\" (UniqueName: \"kubernetes.io/projected/be7a26e3-9284-4316-bce7-7bc15c9178bd-kube-api-access-mlnhn\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8\" (UID: \"be7a26e3-9284-4316-bce7-7bc15c9178bd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.533807 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be7a26e3-9284-4316-bce7-7bc15c9178bd-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8\" (UID: \"be7a26e3-9284-4316-bce7-7bc15c9178bd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.533839 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmmmc\" (UniqueName: \"kubernetes.io/projected/2ee622d2-acd4-4eec-9fbb-12b5bae7e32f-kube-api-access-nmmmc\") pod \"octavia-operator-controller-manager-6687f8d877-4sftq\" (UID: \"2ee622d2-acd4-4eec-9fbb-12b5bae7e32f\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-4sftq" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.541188 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-rwmjx" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.541455 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.543009 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.548070 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-t4hbm"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.549198 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t4hbm" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.551717 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-7jfpq" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.556264 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlbfg\" (UniqueName: \"kubernetes.io/projected/0ea209e2-96bf-4919-ad8f-f86de2b78ab1-kube-api-access-jlbfg\") pod \"neutron-operator-controller-manager-585dbc889-2xdfh\" (UID: \"0ea209e2-96bf-4919-ad8f-f86de2b78ab1\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2xdfh" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.557667 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-mx5xp"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.565654 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mx5xp" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.566031 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-59rt2" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.566585 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-t4hbm"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.568251 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v65mv\" (UniqueName: \"kubernetes.io/projected/604ff246-0f47-4c2c-8940-d76f10dce14e-kube-api-access-v65mv\") pod \"nova-operator-controller-manager-55bff696bd-cpn6f\" (UID: \"604ff246-0f47-4c2c-8940-d76f10dce14e\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cpn6f" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.572989 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-vtxs2" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.582619 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-497sn"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.583704 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-497sn" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.588687 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-h2jx7" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.597438 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-j9bdn" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.602212 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-mx5xp"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.611103 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2xdfh" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.621853 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-497sn"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.632760 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bmzrd"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.634206 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bmzrd" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.634926 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlnhn\" (UniqueName: \"kubernetes.io/projected/be7a26e3-9284-4316-bce7-7bc15c9178bd-kube-api-access-mlnhn\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8\" (UID: \"be7a26e3-9284-4316-bce7-7bc15c9178bd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.634997 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be7a26e3-9284-4316-bce7-7bc15c9178bd-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8\" (UID: \"be7a26e3-9284-4316-bce7-7bc15c9178bd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.635039 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmmmc\" (UniqueName: \"kubernetes.io/projected/2ee622d2-acd4-4eec-9fbb-12b5bae7e32f-kube-api-access-nmmmc\") pod \"octavia-operator-controller-manager-6687f8d877-4sftq\" (UID: \"2ee622d2-acd4-4eec-9fbb-12b5bae7e32f\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-4sftq" Jan 30 08:24:27 crc kubenswrapper[4870]: E0130 08:24:27.635243 4870 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:24:27 crc kubenswrapper[4870]: E0130 08:24:27.635314 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be7a26e3-9284-4316-bce7-7bc15c9178bd-cert podName:be7a26e3-9284-4316-bce7-7bc15c9178bd nodeName:}" failed. No retries permitted until 2026-01-30 08:24:28.135292121 +0000 UTC m=+906.830839230 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/be7a26e3-9284-4316-bce7-7bc15c9178bd-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" (UID: "be7a26e3-9284-4316-bce7-7bc15c9178bd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.638948 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-7v7hd" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.645984 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bmzrd"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.674630 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmmmc\" (UniqueName: \"kubernetes.io/projected/2ee622d2-acd4-4eec-9fbb-12b5bae7e32f-kube-api-access-nmmmc\") pod \"octavia-operator-controller-manager-6687f8d877-4sftq\" (UID: \"2ee622d2-acd4-4eec-9fbb-12b5bae7e32f\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-4sftq" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.675126 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlnhn\" (UniqueName: \"kubernetes.io/projected/be7a26e3-9284-4316-bce7-7bc15c9178bd-kube-api-access-mlnhn\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8\" (UID: \"be7a26e3-9284-4316-bce7-7bc15c9178bd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.678060 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-t8ncr"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.682810 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-t8ncr" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.686250 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-mnwss" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.704512 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-t8ncr"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.717422 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cpn6f" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.759968 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hxxb\" (UniqueName: \"kubernetes.io/projected/2de7363a-3627-42bb-a58f-7bad2e414192-kube-api-access-5hxxb\") pod \"swift-operator-controller-manager-68fc8c869-497sn\" (UID: \"2de7363a-3627-42bb-a58f-7bad2e414192\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-497sn" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.760038 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgr5f\" (UniqueName: \"kubernetes.io/projected/ec9257db-1c02-4160-9c89-7df62f2ce602-kube-api-access-qgr5f\") pod \"ovn-operator-controller-manager-788c46999f-t4hbm\" (UID: \"ec9257db-1c02-4160-9c89-7df62f2ce602\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t4hbm" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.760085 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q5dq\" (UniqueName: \"kubernetes.io/projected/274d3a56-3caf-4dd2-b122-e3b45a3eec6e-kube-api-access-2q5dq\") pod \"placement-operator-controller-manager-5b964cf4cd-mx5xp\" (UID: \"274d3a56-3caf-4dd2-b122-e3b45a3eec6e\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mx5xp" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.760195 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdljv\" (UniqueName: \"kubernetes.io/projected/0319ce7f-95ab-4abf-9101-bf436cc74bf4-kube-api-access-zdljv\") pod \"telemetry-operator-controller-manager-64b5b76f97-bmzrd\" (UID: \"0319ce7f-95ab-4abf-9101-bf436cc74bf4\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bmzrd" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.760214 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tldkm\" (UniqueName: \"kubernetes.io/projected/378c24d4-b8c1-4cd2-a85c-8449aa00ad3e-kube-api-access-tldkm\") pod \"test-operator-controller-manager-56f8bfcd9f-t8ncr\" (UID: \"378c24d4-b8c1-4cd2-a85c-8449aa00ad3e\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-t8ncr" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.763990 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7b7dd57594-2p68v"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.765157 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7b7dd57594-2p68v" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.768863 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-2dhf6" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.788730 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-4sftq" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.831003 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7b7dd57594-2p68v"] Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.875709 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdljv\" (UniqueName: \"kubernetes.io/projected/0319ce7f-95ab-4abf-9101-bf436cc74bf4-kube-api-access-zdljv\") pod \"telemetry-operator-controller-manager-64b5b76f97-bmzrd\" (UID: \"0319ce7f-95ab-4abf-9101-bf436cc74bf4\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bmzrd" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.875759 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tldkm\" (UniqueName: \"kubernetes.io/projected/378c24d4-b8c1-4cd2-a85c-8449aa00ad3e-kube-api-access-tldkm\") pod \"test-operator-controller-manager-56f8bfcd9f-t8ncr\" (UID: \"378c24d4-b8c1-4cd2-a85c-8449aa00ad3e\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-t8ncr" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.875831 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hxxb\" (UniqueName: \"kubernetes.io/projected/2de7363a-3627-42bb-a58f-7bad2e414192-kube-api-access-5hxxb\") pod \"swift-operator-controller-manager-68fc8c869-497sn\" (UID: \"2de7363a-3627-42bb-a58f-7bad2e414192\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-497sn" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.875854 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgr5f\" (UniqueName: \"kubernetes.io/projected/ec9257db-1c02-4160-9c89-7df62f2ce602-kube-api-access-qgr5f\") pod \"ovn-operator-controller-manager-788c46999f-t4hbm\" (UID: \"ec9257db-1c02-4160-9c89-7df62f2ce602\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t4hbm" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.875955 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q5dq\" (UniqueName: \"kubernetes.io/projected/274d3a56-3caf-4dd2-b122-e3b45a3eec6e-kube-api-access-2q5dq\") pod \"placement-operator-controller-manager-5b964cf4cd-mx5xp\" (UID: \"274d3a56-3caf-4dd2-b122-e3b45a3eec6e\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mx5xp" Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.876008 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05-cert\") pod \"infra-operator-controller-manager-79955696d6-spzcf\" (UID: \"46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-spzcf" Jan 30 08:24:27 crc kubenswrapper[4870]: E0130 08:24:27.876149 4870 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 08:24:27 crc kubenswrapper[4870]: E0130 08:24:27.876201 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05-cert podName:46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05 nodeName:}" failed. 
Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.891624 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd"]
Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.892776 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd"
Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.902407 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.902679 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.902841 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-5fr4g"
Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.942679 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd"]
Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.950205 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgr5f\" (UniqueName: \"kubernetes.io/projected/ec9257db-1c02-4160-9c89-7df62f2ce602-kube-api-access-qgr5f\") pod \"ovn-operator-controller-manager-788c46999f-t4hbm\" (UID: \"ec9257db-1c02-4160-9c89-7df62f2ce602\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t4hbm"
Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.951772 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tldkm\" (UniqueName: \"kubernetes.io/projected/378c24d4-b8c1-4cd2-a85c-8449aa00ad3e-kube-api-access-tldkm\") pod \"test-operator-controller-manager-56f8bfcd9f-t8ncr\" (UID: \"378c24d4-b8c1-4cd2-a85c-8449aa00ad3e\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-t8ncr"
Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.961710 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q5dq\" (UniqueName: \"kubernetes.io/projected/274d3a56-3caf-4dd2-b122-e3b45a3eec6e-kube-api-access-2q5dq\") pod \"placement-operator-controller-manager-5b964cf4cd-mx5xp\" (UID: \"274d3a56-3caf-4dd2-b122-e3b45a3eec6e\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mx5xp"
Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.964387 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hxxb\" (UniqueName: \"kubernetes.io/projected/2de7363a-3627-42bb-a58f-7bad2e414192-kube-api-access-5hxxb\") pod \"swift-operator-controller-manager-68fc8c869-497sn\" (UID: \"2de7363a-3627-42bb-a58f-7bad2e414192\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-497sn"
Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.964937 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdljv\" (UniqueName: \"kubernetes.io/projected/0319ce7f-95ab-4abf-9101-bf436cc74bf4-kube-api-access-zdljv\") pod \"telemetry-operator-controller-manager-64b5b76f97-bmzrd\" (UID: \"0319ce7f-95ab-4abf-9101-bf436cc74bf4\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bmzrd"
Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.979059 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc59p\" (UniqueName: \"kubernetes.io/projected/d6956410-92c0-40bf-b1c1-a3353ccf1bbc-kube-api-access-mc59p\") pod \"watcher-operator-controller-manager-7b7dd57594-2p68v\" (UID: \"d6956410-92c0-40bf-b1c1-a3353ccf1bbc\") " pod="openstack-operators/watcher-operator-controller-manager-7b7dd57594-2p68v"
Jan 30 08:24:27 crc kubenswrapper[4870]: I0130 08:24:27.995825 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sds6v"]
Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.004511 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sds6v"
Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.007516 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-k5dk2"
Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.008307 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bmzrd"
Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.021077 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sds6v"]
Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.050561 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t4hbm"
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t4hbm" Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.082440 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-webhook-certs\") pod \"openstack-operator-controller-manager-65544cf747-sgxjd\" (UID: \"fcdb20a3-7229-48e6-8f12-d1b6a5c892f3\") " pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.082497 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-metrics-certs\") pod \"openstack-operator-controller-manager-65544cf747-sgxjd\" (UID: \"fcdb20a3-7229-48e6-8f12-d1b6a5c892f3\") " pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.082628 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmkf8\" (UniqueName: \"kubernetes.io/projected/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-kube-api-access-kmkf8\") pod \"openstack-operator-controller-manager-65544cf747-sgxjd\" (UID: \"fcdb20a3-7229-48e6-8f12-d1b6a5c892f3\") " pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.082746 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc59p\" (UniqueName: \"kubernetes.io/projected/d6956410-92c0-40bf-b1c1-a3353ccf1bbc-kube-api-access-mc59p\") pod \"watcher-operator-controller-manager-7b7dd57594-2p68v\" (UID: \"d6956410-92c0-40bf-b1c1-a3353ccf1bbc\") " pod="openstack-operators/watcher-operator-controller-manager-7b7dd57594-2p68v" Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.101911 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc59p\" (UniqueName: \"kubernetes.io/projected/d6956410-92c0-40bf-b1c1-a3353ccf1bbc-kube-api-access-mc59p\") pod \"watcher-operator-controller-manager-7b7dd57594-2p68v\" (UID: \"d6956410-92c0-40bf-b1c1-a3353ccf1bbc\") " pod="openstack-operators/watcher-operator-controller-manager-7b7dd57594-2p68v" Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.122349 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-497sn" Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.195911 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be7a26e3-9284-4316-bce7-7bc15c9178bd-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8\" (UID: \"be7a26e3-9284-4316-bce7-7bc15c9178bd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.195973 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t5t8\" (UniqueName: \"kubernetes.io/projected/b706cc39-6af6-4a91-b2a2-6160148dadae-kube-api-access-8t5t8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-sds6v\" (UID: \"b706cc39-6af6-4a91-b2a2-6160148dadae\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sds6v" Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.196004 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-webhook-certs\") pod \"openstack-operator-controller-manager-65544cf747-sgxjd\" (UID: \"fcdb20a3-7229-48e6-8f12-d1b6a5c892f3\") " pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.196034 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-metrics-certs\") pod \"openstack-operator-controller-manager-65544cf747-sgxjd\" (UID: \"fcdb20a3-7229-48e6-8f12-d1b6a5c892f3\") " pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.196071 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmkf8\" (UniqueName: \"kubernetes.io/projected/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-kube-api-access-kmkf8\") pod \"openstack-operator-controller-manager-65544cf747-sgxjd\" (UID: \"fcdb20a3-7229-48e6-8f12-d1b6a5c892f3\") " pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:24:28 crc kubenswrapper[4870]: E0130 08:24:28.196425 4870 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:24:28 crc kubenswrapper[4870]: E0130 08:24:28.196484 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be7a26e3-9284-4316-bce7-7bc15c9178bd-cert podName:be7a26e3-9284-4316-bce7-7bc15c9178bd nodeName:}" failed. No retries permitted until 2026-01-30 08:24:29.19646664 +0000 UTC m=+907.892013749 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/be7a26e3-9284-4316-bce7-7bc15c9178bd-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" (UID: "be7a26e3-9284-4316-bce7-7bc15c9178bd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:24:28 crc kubenswrapper[4870]: E0130 08:24:28.196531 4870 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 08:24:28 crc kubenswrapper[4870]: E0130 08:24:28.196573 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-webhook-certs podName:fcdb20a3-7229-48e6-8f12-d1b6a5c892f3 nodeName:}" failed. No retries permitted until 2026-01-30 08:24:28.696557973 +0000 UTC m=+907.392105082 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-webhook-certs") pod "openstack-operator-controller-manager-65544cf747-sgxjd" (UID: "fcdb20a3-7229-48e6-8f12-d1b6a5c892f3") : secret "webhook-server-cert" not found Jan 30 08:24:28 crc kubenswrapper[4870]: E0130 08:24:28.196612 4870 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 08:24:28 crc kubenswrapper[4870]: E0130 08:24:28.196631 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-metrics-certs podName:fcdb20a3-7229-48e6-8f12-d1b6a5c892f3 nodeName:}" failed. No retries permitted until 2026-01-30 08:24:28.696625855 +0000 UTC m=+907.392172964 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-metrics-certs") pod "openstack-operator-controller-manager-65544cf747-sgxjd" (UID: "fcdb20a3-7229-48e6-8f12-d1b6a5c892f3") : secret "metrics-server-cert" not found Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.216480 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmkf8\" (UniqueName: \"kubernetes.io/projected/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-kube-api-access-kmkf8\") pod \"openstack-operator-controller-manager-65544cf747-sgxjd\" (UID: \"fcdb20a3-7229-48e6-8f12-d1b6a5c892f3\") " pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.262506 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mx5xp" Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.300168 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t5t8\" (UniqueName: \"kubernetes.io/projected/b706cc39-6af6-4a91-b2a2-6160148dadae-kube-api-access-8t5t8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-sds6v\" (UID: \"b706cc39-6af6-4a91-b2a2-6160148dadae\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sds6v" Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.322294 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t5t8\" (UniqueName: \"kubernetes.io/projected/b706cc39-6af6-4a91-b2a2-6160148dadae-kube-api-access-8t5t8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-sds6v\" (UID: \"b706cc39-6af6-4a91-b2a2-6160148dadae\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sds6v" Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.325851 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-t8ncr" Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.368642 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wfpg9"] Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.379227 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-hsfpq"] Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.412372 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7b7dd57594-2p68v" Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.457853 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sds6v" Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.478742 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.712608 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-webhook-certs\") pod \"openstack-operator-controller-manager-65544cf747-sgxjd\" (UID: \"fcdb20a3-7229-48e6-8f12-d1b6a5c892f3\") " pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.712678 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-metrics-certs\") pod \"openstack-operator-controller-manager-65544cf747-sgxjd\" (UID: \"fcdb20a3-7229-48e6-8f12-d1b6a5c892f3\") " pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:24:28 crc kubenswrapper[4870]: E0130 08:24:28.713160 4870 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 08:24:28 crc kubenswrapper[4870]: E0130 08:24:28.713276 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-webhook-certs podName:fcdb20a3-7229-48e6-8f12-d1b6a5c892f3 nodeName:}" failed. No retries permitted until 2026-01-30 08:24:29.713247236 +0000 UTC m=+908.408794555 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-webhook-certs") pod "openstack-operator-controller-manager-65544cf747-sgxjd" (UID: "fcdb20a3-7229-48e6-8f12-d1b6a5c892f3") : secret "webhook-server-cert" not found Jan 30 08:24:28 crc kubenswrapper[4870]: E0130 08:24:28.713179 4870 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 08:24:28 crc kubenswrapper[4870]: E0130 08:24:28.713370 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-metrics-certs podName:fcdb20a3-7229-48e6-8f12-d1b6a5c892f3 nodeName:}" failed. No retries permitted until 2026-01-30 08:24:29.713344229 +0000 UTC m=+908.408891338 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-metrics-certs") pod "openstack-operator-controller-manager-65544cf747-sgxjd" (UID: "fcdb20a3-7229-48e6-8f12-d1b6a5c892f3") : secret "metrics-server-cert" not found Jan 30 08:24:28 crc kubenswrapper[4870]: I0130 08:24:28.919216 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05-cert\") pod \"infra-operator-controller-manager-79955696d6-spzcf\" (UID: \"46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-spzcf" Jan 30 08:24:28 crc kubenswrapper[4870]: E0130 08:24:28.919578 4870 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 08:24:28 crc kubenswrapper[4870]: E0130 08:24:28.919777 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05-cert podName:46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05 nodeName:}" failed. No retries permitted until 2026-01-30 08:24:30.919752054 +0000 UTC m=+909.615299163 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05-cert") pod "infra-operator-controller-manager-79955696d6-spzcf" (UID: "46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05") : secret "infra-operator-webhook-server-cert" not found Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.108097 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wfpg9" event={"ID":"54c01287-d66d-46bc-bbb8-7532263099c5","Type":"ContainerStarted","Data":"8705bbe514d4febdd908e9cb1e80f09f3b9bcc97d67c37a69fbdadce948fae15"} Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.112703 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-hsfpq" event={"ID":"e973c5f3-3291-4d4b-85ce-806ef6f83c1a","Type":"ContainerStarted","Data":"8a04e10fbb8077fb4534634bfe13f1479349ee92c03526701e6408a74dfdd9ea"} Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.149970 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-5vfrj"] Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.157239 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-grbz8"] Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.176804 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-hbmf7"] Jan 30 08:24:29 crc kubenswrapper[4870]: W0130 08:24:29.189750 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod925313c0_6800_4a27_814b_887b46cf49ad.slice/crio-ca802d3170127a339a07e883131ed9b8367a04fddd9be1cc41a5c73076371779 WatchSource:0}: Error finding container ca802d3170127a339a07e883131ed9b8367a04fddd9be1cc41a5c73076371779: Status 404 returned error can't find the container with id ca802d3170127a339a07e883131ed9b8367a04fddd9be1cc41a5c73076371779 Jan 30 08:24:29 crc kubenswrapper[4870]: W0130 08:24:29.193117 4870 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96be73fb_f1fc_4c5c_a643_7b9dcc832ac6.slice/crio-82c40ea0caf56a142d720efdb33fcb2f16da6b40e6b95ca18f92f2e50b525477 WatchSource:0}: Error finding container 82c40ea0caf56a142d720efdb33fcb2f16da6b40e6b95ca18f92f2e50b525477: Status 404 returned error can't find the container with id 82c40ea0caf56a142d720efdb33fcb2f16da6b40e6b95ca18f92f2e50b525477 Jan 30 08:24:29 crc kubenswrapper[4870]: W0130 08:24:29.198532 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5680ceb3_f5ec_4d9e_a313_13564402bff2.slice/crio-430ba50c6674f91e8d2de033cf5bdffa029096a44c6b3bbbed4448a2eef40fd6 WatchSource:0}: Error finding container 430ba50c6674f91e8d2de033cf5bdffa029096a44c6b3bbbed4448a2eef40fd6: Status 404 returned error can't find the container with id 430ba50c6674f91e8d2de033cf5bdffa029096a44c6b3bbbed4448a2eef40fd6 Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.213614 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-59rt2"] Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.224719 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be7a26e3-9284-4316-bce7-7bc15c9178bd-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8\" (UID: \"be7a26e3-9284-4316-bce7-7bc15c9178bd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" Jan 30 08:24:29 crc kubenswrapper[4870]: E0130 08:24:29.224973 4870 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:24:29 crc kubenswrapper[4870]: E0130 08:24:29.225028 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be7a26e3-9284-4316-bce7-7bc15c9178bd-cert podName:be7a26e3-9284-4316-bce7-7bc15c9178bd nodeName:}" failed. No retries permitted until 2026-01-30 08:24:31.225009834 +0000 UTC m=+909.920556943 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/be7a26e3-9284-4316-bce7-7bc15c9178bd-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" (UID: "be7a26e3-9284-4316-bce7-7bc15c9178bd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.225396 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bmzrd"] Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.234234 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-4sftq"] Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.242070 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-tkrpg"] Jan 30 08:24:29 crc kubenswrapper[4870]: W0130 08:24:29.249464 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9449ead_e087_4895_a88a_8bdfe0835ebd.slice/crio-dbb168fdb23ded393de0c55f2ce0624be8483a0e7c9d08310feed200990f390d WatchSource:0}: Error finding container dbb168fdb23ded393de0c55f2ce0624be8483a0e7c9d08310feed200990f390d: Status 404 returned error can't find the container with id dbb168fdb23ded393de0c55f2ce0624be8483a0e7c9d08310feed200990f390d Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.254427 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-2xdfh"] Jan 30 08:24:29 crc kubenswrapper[4870]: W0130 08:24:29.270384 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod604ff246_0f47_4c2c_8940_d76f10dce14e.slice/crio-efa1c9f6390852a8cb8c82ce7bb8f5364d8d5c898176f3124e9b3a5e46328118 WatchSource:0}: Error finding container efa1c9f6390852a8cb8c82ce7bb8f5364d8d5c898176f3124e9b3a5e46328118: Status 404 returned error can't find the container with id efa1c9f6390852a8cb8c82ce7bb8f5364d8d5c898176f3124e9b3a5e46328118 Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.272959 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-497sn"] Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.287198 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-wlkxq"] Jan 30 08:24:29 crc kubenswrapper[4870]: E0130 08:24:29.292176 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2q5dq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-mx5xp_openstack-operators(274d3a56-3caf-4dd2-b122-e3b45a3eec6e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 08:24:29 crc kubenswrapper[4870]: E0130 08:24:29.293447 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zdljv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-bmzrd_openstack-operators(0319ce7f-95ab-4abf-9101-bf436cc74bf4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 08:24:29 crc kubenswrapper[4870]: E0130 08:24:29.293672 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tldkm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-t8ncr_openstack-operators(378c24d4-b8c1-4cd2-a85c-8449aa00ad3e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 08:24:29 crc 
kubenswrapper[4870]: E0130 08:24:29.293773 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mx5xp" podUID="274d3a56-3caf-4dd2-b122-e3b45a3eec6e" Jan 30 08:24:29 crc kubenswrapper[4870]: E0130 08:24:29.294589 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bmzrd" podUID="0319ce7f-95ab-4abf-9101-bf436cc74bf4" Jan 30 08:24:29 crc kubenswrapper[4870]: E0130 08:24:29.294763 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-t8ncr" podUID="378c24d4-b8c1-4cd2-a85c-8449aa00ad3e" Jan 30 08:24:29 crc kubenswrapper[4870]: W0130 08:24:29.296469 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec9257db_1c02_4160_9c89_7df62f2ce602.slice/crio-b81a8e49e16a8195923ed1b679ffd27f372e0b1b5637d0796086f6d8f92cebbd WatchSource:0}: Error finding container b81a8e49e16a8195923ed1b679ffd27f372e0b1b5637d0796086f6d8f92cebbd: Status 404 returned error can't find the container with id b81a8e49e16a8195923ed1b679ffd27f372e0b1b5637d0796086f6d8f92cebbd Jan 30 08:24:29 crc kubenswrapper[4870]: E0130 08:24:29.296585 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v65mv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-cpn6f_openstack-operators(604ff246-0f47-4c2c-8940-d76f10dce14e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 08:24:29 crc kubenswrapper[4870]: W0130 08:24:29.296695 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6956410_92c0_40bf_b1c1_a3353ccf1bbc.slice/crio-bcba828776f5fc5a4406acc7be034578a26f7703ed3d183bfda40ac288adebdd WatchSource:0}: Error finding container bcba828776f5fc5a4406acc7be034578a26f7703ed3d183bfda40ac288adebdd: Status 404 returned error can't find the container with id bcba828776f5fc5a4406acc7be034578a26f7703ed3d183bfda40ac288adebdd Jan 30 08:24:29 crc kubenswrapper[4870]: E0130 08:24:29.297684 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cpn6f" podUID="604ff246-0f47-4c2c-8940-d76f10dce14e" Jan 30 08:24:29 crc kubenswrapper[4870]: E0130 08:24:29.299310 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qgr5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-t4hbm_openstack-operators(ec9257db-1c02-4160-9c89-7df62f2ce602): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.300252 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-j9bdn"] Jan 30 08:24:29 crc kubenswrapper[4870]: E0130 08:24:29.300395 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t4hbm" podUID="ec9257db-1c02-4160-9c89-7df62f2ce602" Jan 30 08:24:29 crc kubenswrapper[4870]: E0130 08:24:29.300922 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.23:5001/openstack-k8s-operators/watcher-operator:3bb7e7b472cb523c5e84dcb1d8dd4ef08e04be5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mc59p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7b7dd57594-2p68v_openstack-operators(d6956410-92c0-40bf-b1c1-a3353ccf1bbc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 08:24:29 crc kubenswrapper[4870]: E0130 08:24:29.302138 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-7b7dd57594-2p68v" podUID="d6956410-92c0-40bf-b1c1-a3353ccf1bbc" Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.305902 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-cpn6f"] Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.310213 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-t8ncr"] Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.314713 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-t4hbm"] Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.320179 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-mx5xp"] Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.327488 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7b7dd57594-2p68v"] Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.409090 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-rhfst"] Jan 30 08:24:29 crc kubenswrapper[4870]: W0130 08:24:29.413535 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb7aeba5_92f5_4887_9a6a_92d8c57650d2.slice/crio-fc3a729f7ef0f0af0b616dd69825f76822f32f67675fdcbd46a004f703b9cd9e WatchSource:0}: Error finding container fc3a729f7ef0f0af0b616dd69825f76822f32f67675fdcbd46a004f703b9cd9e: Status 404 returned error can't find the container with id fc3a729f7ef0f0af0b616dd69825f76822f32f67675fdcbd46a004f703b9cd9e Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.413799 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sds6v"] Jan 30 08:24:29 crc kubenswrapper[4870]: W0130 08:24:29.417630 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb706cc39_6af6_4a91_b2a2_6160148dadae.slice/crio-e899411b43da84d3c9e618f4eb95433909d455250c61a692462ce159867785e7 WatchSource:0}: 
Error finding container e899411b43da84d3c9e618f4eb95433909d455250c61a692462ce159867785e7: Status 404 returned error can't find the container with id e899411b43da84d3c9e618f4eb95433909d455250c61a692462ce159867785e7 Jan 30 08:24:29 crc kubenswrapper[4870]: E0130 08:24:29.423019 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8t5t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-sds6v_openstack-operators(b706cc39-6af6-4a91-b2a2-6160148dadae): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 08:24:29 crc kubenswrapper[4870]: E0130 08:24:29.424198 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sds6v" podUID="b706cc39-6af6-4a91-b2a2-6160148dadae" Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.735432 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-webhook-certs\") pod \"openstack-operator-controller-manager-65544cf747-sgxjd\" (UID: \"fcdb20a3-7229-48e6-8f12-d1b6a5c892f3\") " pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:24:29 crc kubenswrapper[4870]: I0130 08:24:29.735482 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-metrics-certs\") pod \"openstack-operator-controller-manager-65544cf747-sgxjd\" (UID: \"fcdb20a3-7229-48e6-8f12-d1b6a5c892f3\") " 
pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:24:29 crc kubenswrapper[4870]: E0130 08:24:29.735614 4870 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 08:24:29 crc kubenswrapper[4870]: E0130 08:24:29.735666 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-metrics-certs podName:fcdb20a3-7229-48e6-8f12-d1b6a5c892f3 nodeName:}" failed. No retries permitted until 2026-01-30 08:24:31.735651395 +0000 UTC m=+910.431198504 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-metrics-certs") pod "openstack-operator-controller-manager-65544cf747-sgxjd" (UID: "fcdb20a3-7229-48e6-8f12-d1b6a5c892f3") : secret "metrics-server-cert" not found Jan 30 08:24:29 crc kubenswrapper[4870]: E0130 08:24:29.735713 4870 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 08:24:29 crc kubenswrapper[4870]: E0130 08:24:29.735732 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-webhook-certs podName:fcdb20a3-7229-48e6-8f12-d1b6a5c892f3 nodeName:}" failed. No retries permitted until 2026-01-30 08:24:31.735726558 +0000 UTC m=+910.431273667 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-webhook-certs") pod "openstack-operator-controller-manager-65544cf747-sgxjd" (UID: "fcdb20a3-7229-48e6-8f12-d1b6a5c892f3") : secret "webhook-server-cert" not found Jan 30 08:24:30 crc kubenswrapper[4870]: I0130 08:24:30.127500 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-4sftq" event={"ID":"2ee622d2-acd4-4eec-9fbb-12b5bae7e32f","Type":"ContainerStarted","Data":"5e966fb0d4d9db3f6a49c9436841c6e3dae3122a6b90541799ae68ad023c88ec"} Jan 30 08:24:30 crc kubenswrapper[4870]: I0130 08:24:30.129415 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bmzrd" event={"ID":"0319ce7f-95ab-4abf-9101-bf436cc74bf4","Type":"ContainerStarted","Data":"7566164dbbab2b8432786bcdc9c85380d0a409887d56688baf277e75721e2c55"} Jan 30 08:24:30 crc kubenswrapper[4870]: E0130 08:24:30.131506 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bmzrd" podUID="0319ce7f-95ab-4abf-9101-bf436cc74bf4" Jan 30 08:24:30 crc kubenswrapper[4870]: I0130 08:24:30.134585 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2xdfh" event={"ID":"0ea209e2-96bf-4919-ad8f-f86de2b78ab1","Type":"ContainerStarted","Data":"7ec4abcee0d4012ce955c11273afea1f91c53942b41877f8442abee9bd67ed82"} Jan 30 08:24:30 crc kubenswrapper[4870]: I0130 08:24:30.146041 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-59rt2" 
event={"ID":"ea3efedd-cb74-48c7-b246-b188bac37ed4","Type":"ContainerStarted","Data":"8b682904f00174b2d5eaa8f91f6b0b3572ae203a2564c6cbe17dab8378368141"} Jan 30 08:24:30 crc kubenswrapper[4870]: I0130 08:24:30.163943 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tkrpg" event={"ID":"96be73fb-f1fc-4c5c-a643-7b9dcc832ac6","Type":"ContainerStarted","Data":"82c40ea0caf56a142d720efdb33fcb2f16da6b40e6b95ca18f92f2e50b525477"} Jan 30 08:24:30 crc kubenswrapper[4870]: I0130 08:24:30.167205 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t4hbm" event={"ID":"ec9257db-1c02-4160-9c89-7df62f2ce602","Type":"ContainerStarted","Data":"b81a8e49e16a8195923ed1b679ffd27f372e0b1b5637d0796086f6d8f92cebbd"} Jan 30 08:24:30 crc kubenswrapper[4870]: E0130 08:24:30.169450 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t4hbm" podUID="ec9257db-1c02-4160-9c89-7df62f2ce602" Jan 30 08:24:30 crc kubenswrapper[4870]: I0130 08:24:30.170281 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-497sn" event={"ID":"2de7363a-3627-42bb-a58f-7bad2e414192","Type":"ContainerStarted","Data":"13277e641351b1df4c23b359bf476808de89736339965a838a498c9846c83476"} Jan 30 08:24:30 crc kubenswrapper[4870]: I0130 08:24:30.176956 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cpn6f" event={"ID":"604ff246-0f47-4c2c-8940-d76f10dce14e","Type":"ContainerStarted","Data":"efa1c9f6390852a8cb8c82ce7bb8f5364d8d5c898176f3124e9b3a5e46328118"} Jan 30 08:24:30 crc kubenswrapper[4870]: E0130 08:24:30.178463 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cpn6f" podUID="604ff246-0f47-4c2c-8940-d76f10dce14e" Jan 30 08:24:30 crc kubenswrapper[4870]: I0130 08:24:30.178999 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7b7dd57594-2p68v" event={"ID":"d6956410-92c0-40bf-b1c1-a3353ccf1bbc","Type":"ContainerStarted","Data":"bcba828776f5fc5a4406acc7be034578a26f7703ed3d183bfda40ac288adebdd"} Jan 30 08:24:30 crc kubenswrapper[4870]: E0130 08:24:30.189413 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.23:5001/openstack-k8s-operators/watcher-operator:3bb7e7b472cb523c5e84dcb1d8dd4ef08e04be5e\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7b7dd57594-2p68v" podUID="d6956410-92c0-40bf-b1c1-a3353ccf1bbc" Jan 30 08:24:30 crc kubenswrapper[4870]: I0130 08:24:30.190907 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-t8ncr" 
event={"ID":"378c24d4-b8c1-4cd2-a85c-8449aa00ad3e","Type":"ContainerStarted","Data":"4bc7a92459fbfcf89605cd042fa8fcf00f27466aa87b5d958647ba965095155c"} Jan 30 08:24:30 crc kubenswrapper[4870]: E0130 08:24:30.193443 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-t8ncr" podUID="378c24d4-b8c1-4cd2-a85c-8449aa00ad3e" Jan 30 08:24:30 crc kubenswrapper[4870]: I0130 08:24:30.209859 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-wlkxq" event={"ID":"b9449ead-e087-4895-a88a-8bdfe0835ebd","Type":"ContainerStarted","Data":"dbb168fdb23ded393de0c55f2ce0624be8483a0e7c9d08310feed200990f390d"} Jan 30 08:24:30 crc kubenswrapper[4870]: I0130 08:24:30.218022 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rhfst" event={"ID":"db7aeba5-92f5-4887-9a6a-92d8c57650d2","Type":"ContainerStarted","Data":"fc3a729f7ef0f0af0b616dd69825f76822f32f67675fdcbd46a004f703b9cd9e"} Jan 30 08:24:30 crc kubenswrapper[4870]: I0130 08:24:30.221522 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mx5xp" event={"ID":"274d3a56-3caf-4dd2-b122-e3b45a3eec6e","Type":"ContainerStarted","Data":"c310807e5e1c21bf87a891d4a2bbccb9034a44097c57374f0aa07b0c7b91d9b4"} Jan 30 08:24:30 crc kubenswrapper[4870]: E0130 08:24:30.230841 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mx5xp" podUID="274d3a56-3caf-4dd2-b122-e3b45a3eec6e" Jan 30 08:24:30 crc kubenswrapper[4870]: I0130 08:24:30.230846 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-j9bdn" event={"ID":"5cde6cc5-f427-4349-8c8a-3dce0deac5a9","Type":"ContainerStarted","Data":"63d0eaa47e604ad7f574bc6198cf9c5140d4f342d58b36801d5f5793073ef324"} Jan 30 08:24:30 crc kubenswrapper[4870]: I0130 08:24:30.245830 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hbmf7" event={"ID":"925313c0-6800-4a27-814b-887b46cf49ad","Type":"ContainerStarted","Data":"ca802d3170127a339a07e883131ed9b8367a04fddd9be1cc41a5c73076371779"} Jan 30 08:24:30 crc kubenswrapper[4870]: I0130 08:24:30.250971 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-5vfrj" event={"ID":"5680ceb3-f5ec-4d9e-a313-13564402bff2","Type":"ContainerStarted","Data":"430ba50c6674f91e8d2de033cf5bdffa029096a44c6b3bbbed4448a2eef40fd6"} Jan 30 08:24:30 crc kubenswrapper[4870]: I0130 08:24:30.259569 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-grbz8" event={"ID":"dfd97388-6e7f-4f4a-9e71-7a32a28b1dcb","Type":"ContainerStarted","Data":"791f88e2d4e65862eed61d7c949be76a3cfad83eb03322249e0945038c994e83"} Jan 30 
08:24:30 crc kubenswrapper[4870]: I0130 08:24:30.261558 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sds6v" event={"ID":"b706cc39-6af6-4a91-b2a2-6160148dadae","Type":"ContainerStarted","Data":"e899411b43da84d3c9e618f4eb95433909d455250c61a692462ce159867785e7"} Jan 30 08:24:30 crc kubenswrapper[4870]: E0130 08:24:30.264215 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sds6v" podUID="b706cc39-6af6-4a91-b2a2-6160148dadae" Jan 30 08:24:30 crc kubenswrapper[4870]: I0130 08:24:30.958622 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05-cert\") pod \"infra-operator-controller-manager-79955696d6-spzcf\" (UID: \"46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-spzcf" Jan 30 08:24:30 crc kubenswrapper[4870]: E0130 08:24:30.958785 4870 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 08:24:30 crc kubenswrapper[4870]: E0130 08:24:30.958847 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05-cert podName:46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05 nodeName:}" failed. No retries permitted until 2026-01-30 08:24:34.958825984 +0000 UTC m=+913.654373093 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05-cert") pod "infra-operator-controller-manager-79955696d6-spzcf" (UID: "46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05") : secret "infra-operator-webhook-server-cert" not found Jan 30 08:24:31 crc kubenswrapper[4870]: I0130 08:24:31.263556 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be7a26e3-9284-4316-bce7-7bc15c9178bd-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8\" (UID: \"be7a26e3-9284-4316-bce7-7bc15c9178bd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" Jan 30 08:24:31 crc kubenswrapper[4870]: E0130 08:24:31.263758 4870 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:24:31 crc kubenswrapper[4870]: E0130 08:24:31.263861 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be7a26e3-9284-4316-bce7-7bc15c9178bd-cert podName:be7a26e3-9284-4316-bce7-7bc15c9178bd nodeName:}" failed. No retries permitted until 2026-01-30 08:24:35.263827686 +0000 UTC m=+913.959374795 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/be7a26e3-9284-4316-bce7-7bc15c9178bd-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" (UID: "be7a26e3-9284-4316-bce7-7bc15c9178bd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:24:31 crc kubenswrapper[4870]: E0130 08:24:31.277455 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bmzrd" podUID="0319ce7f-95ab-4abf-9101-bf436cc74bf4" Jan 30 08:24:31 crc kubenswrapper[4870]: E0130 08:24:31.277498 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cpn6f" podUID="604ff246-0f47-4c2c-8940-d76f10dce14e" Jan 30 08:24:31 crc kubenswrapper[4870]: E0130 08:24:31.277732 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mx5xp" podUID="274d3a56-3caf-4dd2-b122-e3b45a3eec6e" Jan 30 08:24:31 crc kubenswrapper[4870]: E0130 08:24:31.277811 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t4hbm" podUID="ec9257db-1c02-4160-9c89-7df62f2ce602" Jan 30 08:24:31 crc kubenswrapper[4870]: E0130 08:24:31.277856 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-t8ncr" podUID="378c24d4-b8c1-4cd2-a85c-8449aa00ad3e" Jan 30 08:24:31 crc kubenswrapper[4870]: E0130 08:24:31.277916 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.23:5001/openstack-k8s-operators/watcher-operator:3bb7e7b472cb523c5e84dcb1d8dd4ef08e04be5e\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7b7dd57594-2p68v" podUID="d6956410-92c0-40bf-b1c1-a3353ccf1bbc" Jan 30 08:24:31 crc kubenswrapper[4870]: E0130 08:24:31.283598 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sds6v" podUID="b706cc39-6af6-4a91-b2a2-6160148dadae" Jan 30 08:24:31 crc kubenswrapper[4870]: I0130 08:24:31.781787 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-webhook-certs\") pod \"openstack-operator-controller-manager-65544cf747-sgxjd\" (UID: \"fcdb20a3-7229-48e6-8f12-d1b6a5c892f3\") " pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:24:31 crc kubenswrapper[4870]: I0130 08:24:31.782064 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-metrics-certs\") pod \"openstack-operator-controller-manager-65544cf747-sgxjd\" (UID: \"fcdb20a3-7229-48e6-8f12-d1b6a5c892f3\") " pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:24:31 crc kubenswrapper[4870]: E0130 08:24:31.782000 4870 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 08:24:31 crc kubenswrapper[4870]: E0130 08:24:31.782155 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-webhook-certs podName:fcdb20a3-7229-48e6-8f12-d1b6a5c892f3 nodeName:}" failed. No retries permitted until 2026-01-30 08:24:35.782137038 +0000 UTC m=+914.477684147 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-webhook-certs") pod "openstack-operator-controller-manager-65544cf747-sgxjd" (UID: "fcdb20a3-7229-48e6-8f12-d1b6a5c892f3") : secret "webhook-server-cert" not found Jan 30 08:24:31 crc kubenswrapper[4870]: E0130 08:24:31.782349 4870 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 08:24:31 crc kubenswrapper[4870]: E0130 08:24:31.782444 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-metrics-certs podName:fcdb20a3-7229-48e6-8f12-d1b6a5c892f3 nodeName:}" failed. No retries permitted until 2026-01-30 08:24:35.782415257 +0000 UTC m=+914.477962426 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-metrics-certs") pod "openstack-operator-controller-manager-65544cf747-sgxjd" (UID: "fcdb20a3-7229-48e6-8f12-d1b6a5c892f3") : secret "metrics-server-cert" not found Jan 30 08:24:34 crc kubenswrapper[4870]: I0130 08:24:34.961114 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05-cert\") pod \"infra-operator-controller-manager-79955696d6-spzcf\" (UID: \"46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-spzcf" Jan 30 08:24:34 crc kubenswrapper[4870]: E0130 08:24:34.962465 4870 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 08:24:34 crc kubenswrapper[4870]: E0130 08:24:34.962559 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05-cert podName:46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05 nodeName:}" failed. No retries permitted until 2026-01-30 08:24:42.962527344 +0000 UTC m=+921.658074453 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05-cert") pod "infra-operator-controller-manager-79955696d6-spzcf" (UID: "46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05") : secret "infra-operator-webhook-server-cert" not found Jan 30 08:24:35 crc kubenswrapper[4870]: I0130 08:24:35.269443 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be7a26e3-9284-4316-bce7-7bc15c9178bd-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8\" (UID: \"be7a26e3-9284-4316-bce7-7bc15c9178bd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" Jan 30 08:24:35 crc kubenswrapper[4870]: E0130 08:24:35.269678 4870 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:24:35 crc kubenswrapper[4870]: E0130 08:24:35.269722 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be7a26e3-9284-4316-bce7-7bc15c9178bd-cert podName:be7a26e3-9284-4316-bce7-7bc15c9178bd nodeName:}" failed. No retries permitted until 2026-01-30 08:24:43.269709275 +0000 UTC m=+921.965256384 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/be7a26e3-9284-4316-bce7-7bc15c9178bd-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" (UID: "be7a26e3-9284-4316-bce7-7bc15c9178bd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:24:35 crc kubenswrapper[4870]: I0130 08:24:35.879202 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-webhook-certs\") pod \"openstack-operator-controller-manager-65544cf747-sgxjd\" (UID: \"fcdb20a3-7229-48e6-8f12-d1b6a5c892f3\") " pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:24:35 crc kubenswrapper[4870]: I0130 08:24:35.879284 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-metrics-certs\") pod \"openstack-operator-controller-manager-65544cf747-sgxjd\" (UID: \"fcdb20a3-7229-48e6-8f12-d1b6a5c892f3\") " pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:24:35 crc kubenswrapper[4870]: E0130 08:24:35.879487 4870 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 08:24:35 crc kubenswrapper[4870]: E0130 08:24:35.879555 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-metrics-certs podName:fcdb20a3-7229-48e6-8f12-d1b6a5c892f3 nodeName:}" failed. No retries permitted until 2026-01-30 08:24:43.879533003 +0000 UTC m=+922.575080152 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-metrics-certs") pod "openstack-operator-controller-manager-65544cf747-sgxjd" (UID: "fcdb20a3-7229-48e6-8f12-d1b6a5c892f3") : secret "metrics-server-cert" not found Jan 30 08:24:35 crc kubenswrapper[4870]: E0130 08:24:35.880193 4870 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 08:24:35 crc kubenswrapper[4870]: E0130 08:24:35.880274 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-webhook-certs podName:fcdb20a3-7229-48e6-8f12-d1b6a5c892f3 nodeName:}" failed. No retries permitted until 2026-01-30 08:24:43.880255206 +0000 UTC m=+922.575802315 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-webhook-certs") pod "openstack-operator-controller-manager-65544cf747-sgxjd" (UID: "fcdb20a3-7229-48e6-8f12-d1b6a5c892f3") : secret "webhook-server-cert" not found Jan 30 08:24:40 crc kubenswrapper[4870]: E0130 08:24:40.904139 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Jan 30 08:24:40 crc kubenswrapper[4870]: E0130 08:24:40.906495 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tgrl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-hbmf7_openstack-operators(925313c0-6800-4a27-814b-887b46cf49ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:24:40 crc kubenswrapper[4870]: E0130 08:24:40.909032 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hbmf7" 
podUID="925313c0-6800-4a27-814b-887b46cf49ad" Jan 30 08:24:41 crc kubenswrapper[4870]: E0130 08:24:41.350426 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hbmf7" podUID="925313c0-6800-4a27-814b-887b46cf49ad" Jan 30 08:24:41 crc kubenswrapper[4870]: E0130 08:24:41.610230 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521" Jan 30 08:24:41 crc kubenswrapper[4870]: E0130 08:24:41.610423 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hw8fb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5f4b8bd54d-5vfrj_openstack-operators(5680ceb3-f5ec-4d9e-a313-13564402bff2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:24:41 crc kubenswrapper[4870]: E0130 08:24:41.611741 4870 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-5vfrj" podUID="5680ceb3-f5ec-4d9e-a313-13564402bff2" Jan 30 08:24:42 crc kubenswrapper[4870]: E0130 08:24:42.359495 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-5vfrj" podUID="5680ceb3-f5ec-4d9e-a313-13564402bff2" Jan 30 08:24:42 crc kubenswrapper[4870]: I0130 08:24:42.997408 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05-cert\") pod \"infra-operator-controller-manager-79955696d6-spzcf\" (UID: \"46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-spzcf" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.009062 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05-cert\") pod \"infra-operator-controller-manager-79955696d6-spzcf\" (UID: \"46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-spzcf" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.013075 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-spzcf" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.311109 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be7a26e3-9284-4316-bce7-7bc15c9178bd-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8\" (UID: \"be7a26e3-9284-4316-bce7-7bc15c9178bd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.317221 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be7a26e3-9284-4316-bce7-7bc15c9178bd-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8\" (UID: \"be7a26e3-9284-4316-bce7-7bc15c9178bd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.502277 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tkrpg" event={"ID":"96be73fb-f1fc-4c5c-a643-7b9dcc832ac6","Type":"ContainerStarted","Data":"302f46534efc0fda5e49b4b3278427082e37cfa32b57b945fc313de4d8c47308"} Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.504328 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tkrpg" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.527868 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-wlkxq" 
event={"ID":"b9449ead-e087-4895-a88a-8bdfe0835ebd","Type":"ContainerStarted","Data":"443f92cafe17d3910cda94dced8968db9fb003f51da9a07a954f5a3c1e55275b"} Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.528441 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-wlkxq" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.556123 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.558344 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-spzcf"] Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.560211 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2xdfh" event={"ID":"0ea209e2-96bf-4919-ad8f-f86de2b78ab1","Type":"ContainerStarted","Data":"0cefd9143f119d915c5d94a3d3fe169e9f64975a40722147c1788c569df20f20"} Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.562323 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2xdfh" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.575646 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-grbz8" event={"ID":"dfd97388-6e7f-4f4a-9e71-7a32a28b1dcb","Type":"ContainerStarted","Data":"21728291b78bccdf9e1cf02baa6b13e5ba2f0048f1783ece933c55b7e2b671e8"} Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.576915 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-grbz8" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.594293 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tkrpg" podStartSLOduration=4.5183341630000005 podStartE2EDuration="17.594278009s" podCreationTimestamp="2026-01-30 08:24:26 +0000 UTC" firstStartedPulling="2026-01-30 08:24:29.198603984 +0000 UTC m=+907.894151093" lastFinishedPulling="2026-01-30 08:24:42.27454783 +0000 UTC m=+920.970094939" observedRunningTime="2026-01-30 08:24:43.591609925 +0000 UTC m=+922.287157034" watchObservedRunningTime="2026-01-30 08:24:43.594278009 +0000 UTC m=+922.289825118" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.606145 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-59rt2" event={"ID":"ea3efedd-cb74-48c7-b246-b188bac37ed4","Type":"ContainerStarted","Data":"f74db5cd71c9fa468a550c1a71b8858ac93d619f5fc9aef5454f80dde5103d39"} Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.606941 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-59rt2" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.655149 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-j9bdn" event={"ID":"5cde6cc5-f427-4349-8c8a-3dce0deac5a9","Type":"ContainerStarted","Data":"0a7ee0d2b5f000d0a431c934ca253da22aa50f69dd25e09135b5e3388d5508bf"} Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.656448 4870 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-j9bdn" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.687376 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-497sn" event={"ID":"2de7363a-3627-42bb-a58f-7bad2e414192","Type":"ContainerStarted","Data":"21bf925c73efb6176960423e3fdb3a3dffee536189cff78b10416c323cdc1a23"} Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.688327 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-497sn" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.708774 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2xdfh" podStartSLOduration=4.647242642 podStartE2EDuration="17.708749975s" podCreationTimestamp="2026-01-30 08:24:26 +0000 UTC" firstStartedPulling="2026-01-30 08:24:29.213093389 +0000 UTC m=+907.908640498" lastFinishedPulling="2026-01-30 08:24:42.274600712 +0000 UTC m=+920.970147831" observedRunningTime="2026-01-30 08:24:43.707139625 +0000 UTC m=+922.402686734" watchObservedRunningTime="2026-01-30 08:24:43.708749975 +0000 UTC m=+922.404297084" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.711420 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-wlkxq" podStartSLOduration=4.690552554 podStartE2EDuration="17.711412179s" podCreationTimestamp="2026-01-30 08:24:26 +0000 UTC" firstStartedPulling="2026-01-30 08:24:29.253781738 +0000 UTC m=+907.949328847" lastFinishedPulling="2026-01-30 08:24:42.274641323 +0000 UTC m=+920.970188472" observedRunningTime="2026-01-30 08:24:43.65667576 +0000 UTC m=+922.352222869" watchObservedRunningTime="2026-01-30 08:24:43.711412179 +0000 UTC m=+922.406959288" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.719189 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rhfst" event={"ID":"db7aeba5-92f5-4887-9a6a-92d8c57650d2","Type":"ContainerStarted","Data":"46c705cc2a19f3d3043d9480b4eb1564d5f732cff2cd7bbfa6f1a8fed8972571"} Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.719237 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rhfst" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.730077 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wfpg9" event={"ID":"54c01287-d66d-46bc-bbb8-7532263099c5","Type":"ContainerStarted","Data":"5eb0dd4b48f0c8ec718f0a8040f2d3be356550119225643713a48899a64c778b"} Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.730492 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wfpg9" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.745745 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-hsfpq" event={"ID":"e973c5f3-3291-4d4b-85ce-806ef6f83c1a","Type":"ContainerStarted","Data":"b9012e1f5cd8f7cf422d1d64797178731df193bb4b5da53ea6c7480a02f4188e"} Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.746559 4870 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-hsfpq" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.747264 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-grbz8" podStartSLOduration=4.757318383 podStartE2EDuration="17.747253056s" podCreationTimestamp="2026-01-30 08:24:26 +0000 UTC" firstStartedPulling="2026-01-30 08:24:29.284307357 +0000 UTC m=+907.979854456" lastFinishedPulling="2026-01-30 08:24:42.27424202 +0000 UTC m=+920.969789129" observedRunningTime="2026-01-30 08:24:43.737266501 +0000 UTC m=+922.432813610" watchObservedRunningTime="2026-01-30 08:24:43.747253056 +0000 UTC m=+922.442800165" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.764115 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-4sftq" event={"ID":"2ee622d2-acd4-4eec-9fbb-12b5bae7e32f","Type":"ContainerStarted","Data":"98939eb02384bd35f3d01780dc85c078ff3d2a6c6ff0cd99f8a632ace96c7325"} Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.765083 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-4sftq" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.779644 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rhfst" podStartSLOduration=4.814153528 podStartE2EDuration="17.779616192s" podCreationTimestamp="2026-01-30 08:24:26 +0000 UTC" firstStartedPulling="2026-01-30 08:24:29.415910921 +0000 UTC m=+908.111458030" lastFinishedPulling="2026-01-30 08:24:42.381373545 +0000 UTC m=+921.076920694" observedRunningTime="2026-01-30 08:24:43.778370972 +0000 UTC m=+922.473918081" watchObservedRunningTime="2026-01-30 08:24:43.779616192 +0000 UTC m=+922.475163301" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.812718 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-497sn" podStartSLOduration=3.802834061 podStartE2EDuration="16.812693531s" podCreationTimestamp="2026-01-30 08:24:27 +0000 UTC" firstStartedPulling="2026-01-30 08:24:29.264836555 +0000 UTC m=+907.960383664" lastFinishedPulling="2026-01-30 08:24:42.274696025 +0000 UTC m=+920.970243134" observedRunningTime="2026-01-30 08:24:43.805563467 +0000 UTC m=+922.501110576" watchObservedRunningTime="2026-01-30 08:24:43.812693531 +0000 UTC m=+922.508240640" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.866831 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-59rt2" podStartSLOduration=4.807320892 podStartE2EDuration="17.866815071s" podCreationTimestamp="2026-01-30 08:24:26 +0000 UTC" firstStartedPulling="2026-01-30 08:24:29.215005189 +0000 UTC m=+907.910552298" lastFinishedPulling="2026-01-30 08:24:42.274499368 +0000 UTC m=+920.970046477" observedRunningTime="2026-01-30 08:24:43.841209847 +0000 UTC m=+922.536756956" watchObservedRunningTime="2026-01-30 08:24:43.866815071 +0000 UTC m=+922.562362180" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.868974 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-j9bdn" 
podStartSLOduration=4.798588758 podStartE2EDuration="17.868964019s" podCreationTimestamp="2026-01-30 08:24:26 +0000 UTC" firstStartedPulling="2026-01-30 08:24:29.265237007 +0000 UTC m=+907.960784116" lastFinishedPulling="2026-01-30 08:24:42.335612228 +0000 UTC m=+921.031159377" observedRunningTime="2026-01-30 08:24:43.864105486 +0000 UTC m=+922.559652595" watchObservedRunningTime="2026-01-30 08:24:43.868964019 +0000 UTC m=+922.564511128" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.897635 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wfpg9" podStartSLOduration=4.1018523 podStartE2EDuration="17.897619179s" podCreationTimestamp="2026-01-30 08:24:26 +0000 UTC" firstStartedPulling="2026-01-30 08:24:28.478514092 +0000 UTC m=+907.174061201" lastFinishedPulling="2026-01-30 08:24:42.274280971 +0000 UTC m=+920.969828080" observedRunningTime="2026-01-30 08:24:43.893507129 +0000 UTC m=+922.589054248" watchObservedRunningTime="2026-01-30 08:24:43.897619179 +0000 UTC m=+922.593166288" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.936173 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-webhook-certs\") pod \"openstack-operator-controller-manager-65544cf747-sgxjd\" (UID: \"fcdb20a3-7229-48e6-8f12-d1b6a5c892f3\") " pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.936227 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-metrics-certs\") pod \"openstack-operator-controller-manager-65544cf747-sgxjd\" (UID: \"fcdb20a3-7229-48e6-8f12-d1b6a5c892f3\") " pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:24:43 crc kubenswrapper[4870]: E0130 08:24:43.942643 4870 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 08:24:43 crc kubenswrapper[4870]: E0130 08:24:43.942703 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-webhook-certs podName:fcdb20a3-7229-48e6-8f12-d1b6a5c892f3 nodeName:}" failed. No retries permitted until 2026-01-30 08:24:59.942686565 +0000 UTC m=+938.638233674 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-webhook-certs") pod "openstack-operator-controller-manager-65544cf747-sgxjd" (UID: "fcdb20a3-7229-48e6-8f12-d1b6a5c892f3") : secret "webhook-server-cert" not found Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.943644 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-metrics-certs\") pod \"openstack-operator-controller-manager-65544cf747-sgxjd\" (UID: \"fcdb20a3-7229-48e6-8f12-d1b6a5c892f3\") " pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:24:43 crc kubenswrapper[4870]: I0130 08:24:43.955236 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-hsfpq" podStartSLOduration=4.1598661 podStartE2EDuration="17.955196517s" podCreationTimestamp="2026-01-30 08:24:26 +0000 UTC" firstStartedPulling="2026-01-30 08:24:28.478963655 +0000 UTC m=+907.174510764" lastFinishedPulling="2026-01-30 08:24:42.274294072 +0000 UTC m=+920.969841181" observedRunningTime="2026-01-30 08:24:43.922440749 +0000 UTC m=+922.617987868" watchObservedRunningTime="2026-01-30 08:24:43.955196517 +0000 UTC m=+922.650743626" Jan 30 08:24:44 crc kubenswrapper[4870]: I0130 08:24:44.139254 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-4sftq" podStartSLOduration=4.115151483 podStartE2EDuration="17.13923777s" podCreationTimestamp="2026-01-30 08:24:27 +0000 UTC" firstStartedPulling="2026-01-30 08:24:29.250116662 +0000 UTC m=+907.945663771" lastFinishedPulling="2026-01-30 08:24:42.274202939 +0000 UTC m=+920.969750058" observedRunningTime="2026-01-30 08:24:43.962396224 +0000 UTC m=+922.657943333" watchObservedRunningTime="2026-01-30 08:24:44.13923777 +0000 UTC m=+922.834784879" Jan 30 08:24:44 crc kubenswrapper[4870]: I0130 08:24:44.144939 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8"] Jan 30 08:24:44 crc kubenswrapper[4870]: W0130 08:24:44.155861 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe7a26e3_9284_4316_bce7_7bc15c9178bd.slice/crio-fac987e50460bd25b1ae104eb4a6f63c98f74cefb841dc670a1788504ce60fba WatchSource:0}: Error finding container fac987e50460bd25b1ae104eb4a6f63c98f74cefb841dc670a1788504ce60fba: Status 404 returned error can't find the container with id fac987e50460bd25b1ae104eb4a6f63c98f74cefb841dc670a1788504ce60fba Jan 30 08:24:44 crc kubenswrapper[4870]: I0130 08:24:44.788340 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-spzcf" event={"ID":"46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05","Type":"ContainerStarted","Data":"027bbda16f8aafae2cbe7d992287e32c0f034c4bb79913d28b86fd7dde0f3cea"} Jan 30 08:24:44 crc kubenswrapper[4870]: I0130 08:24:44.801039 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" event={"ID":"be7a26e3-9284-4316-bce7-7bc15c9178bd","Type":"ContainerStarted","Data":"fac987e50460bd25b1ae104eb4a6f63c98f74cefb841dc670a1788504ce60fba"} Jan 30 08:24:47 crc kubenswrapper[4870]: I0130 
08:24:47.111991 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-hsfpq" Jan 30 08:24:47 crc kubenswrapper[4870]: I0130 08:24:47.167719 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-grbz8" Jan 30 08:24:47 crc kubenswrapper[4870]: I0130 08:24:47.174251 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tkrpg" Jan 30 08:24:47 crc kubenswrapper[4870]: I0130 08:24:47.484469 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rhfst" Jan 30 08:24:47 crc kubenswrapper[4870]: I0130 08:24:47.570320 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-59rt2" Jan 30 08:24:47 crc kubenswrapper[4870]: I0130 08:24:47.603924 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-j9bdn" Jan 30 08:24:47 crc kubenswrapper[4870]: I0130 08:24:47.616432 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2xdfh" Jan 30 08:24:48 crc kubenswrapper[4870]: I0130 08:24:48.131646 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-497sn" Jan 30 08:24:55 crc kubenswrapper[4870]: I0130 08:24:55.249430 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:24:55 crc kubenswrapper[4870]: I0130 08:24:55.250343 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:24:57 crc kubenswrapper[4870]: I0130 08:24:57.101974 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wfpg9" Jan 30 08:24:57 crc kubenswrapper[4870]: I0130 08:24:57.249718 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-wlkxq" Jan 30 08:24:57 crc kubenswrapper[4870]: I0130 08:24:57.800559 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-4sftq" Jan 30 08:24:59 crc kubenswrapper[4870]: I0130 08:24:59.170444 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l8t5h"] Jan 30 08:24:59 crc kubenswrapper[4870]: I0130 08:24:59.171762 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l8t5h" Jan 30 08:24:59 crc kubenswrapper[4870]: I0130 08:24:59.184343 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l8t5h"] Jan 30 08:24:59 crc kubenswrapper[4870]: I0130 08:24:59.313120 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2ebdb93-c8ce-45c1-b10f-037853cc99d9-utilities\") pod \"redhat-marketplace-l8t5h\" (UID: \"f2ebdb93-c8ce-45c1-b10f-037853cc99d9\") " pod="openshift-marketplace/redhat-marketplace-l8t5h" Jan 30 08:24:59 crc kubenswrapper[4870]: I0130 08:24:59.313352 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q99jp\" (UniqueName: \"kubernetes.io/projected/f2ebdb93-c8ce-45c1-b10f-037853cc99d9-kube-api-access-q99jp\") pod \"redhat-marketplace-l8t5h\" (UID: \"f2ebdb93-c8ce-45c1-b10f-037853cc99d9\") " pod="openshift-marketplace/redhat-marketplace-l8t5h" Jan 30 08:24:59 crc kubenswrapper[4870]: I0130 08:24:59.313624 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2ebdb93-c8ce-45c1-b10f-037853cc99d9-catalog-content\") pod \"redhat-marketplace-l8t5h\" (UID: \"f2ebdb93-c8ce-45c1-b10f-037853cc99d9\") " pod="openshift-marketplace/redhat-marketplace-l8t5h" Jan 30 08:24:59 crc kubenswrapper[4870]: I0130 08:24:59.415043 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q99jp\" (UniqueName: \"kubernetes.io/projected/f2ebdb93-c8ce-45c1-b10f-037853cc99d9-kube-api-access-q99jp\") pod \"redhat-marketplace-l8t5h\" (UID: \"f2ebdb93-c8ce-45c1-b10f-037853cc99d9\") " pod="openshift-marketplace/redhat-marketplace-l8t5h" Jan 30 08:24:59 crc kubenswrapper[4870]: I0130 08:24:59.415156 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2ebdb93-c8ce-45c1-b10f-037853cc99d9-catalog-content\") pod \"redhat-marketplace-l8t5h\" (UID: \"f2ebdb93-c8ce-45c1-b10f-037853cc99d9\") " pod="openshift-marketplace/redhat-marketplace-l8t5h" Jan 30 08:24:59 crc kubenswrapper[4870]: I0130 08:24:59.415220 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2ebdb93-c8ce-45c1-b10f-037853cc99d9-utilities\") pod \"redhat-marketplace-l8t5h\" (UID: \"f2ebdb93-c8ce-45c1-b10f-037853cc99d9\") " pod="openshift-marketplace/redhat-marketplace-l8t5h" Jan 30 08:24:59 crc kubenswrapper[4870]: I0130 08:24:59.415767 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2ebdb93-c8ce-45c1-b10f-037853cc99d9-utilities\") pod \"redhat-marketplace-l8t5h\" (UID: \"f2ebdb93-c8ce-45c1-b10f-037853cc99d9\") " pod="openshift-marketplace/redhat-marketplace-l8t5h" Jan 30 08:24:59 crc kubenswrapper[4870]: I0130 08:24:59.416396 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2ebdb93-c8ce-45c1-b10f-037853cc99d9-catalog-content\") pod \"redhat-marketplace-l8t5h\" (UID: \"f2ebdb93-c8ce-45c1-b10f-037853cc99d9\") " pod="openshift-marketplace/redhat-marketplace-l8t5h" Jan 30 08:24:59 crc kubenswrapper[4870]: I0130 08:24:59.439777 4870 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-q99jp\" (UniqueName: \"kubernetes.io/projected/f2ebdb93-c8ce-45c1-b10f-037853cc99d9-kube-api-access-q99jp\") pod \"redhat-marketplace-l8t5h\" (UID: \"f2ebdb93-c8ce-45c1-b10f-037853cc99d9\") " pod="openshift-marketplace/redhat-marketplace-l8t5h" Jan 30 08:24:59 crc kubenswrapper[4870]: I0130 08:24:59.497144 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l8t5h" Jan 30 08:25:00 crc kubenswrapper[4870]: I0130 08:25:00.023190 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-webhook-certs\") pod \"openstack-operator-controller-manager-65544cf747-sgxjd\" (UID: \"fcdb20a3-7229-48e6-8f12-d1b6a5c892f3\") " pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:25:00 crc kubenswrapper[4870]: I0130 08:25:00.029355 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fcdb20a3-7229-48e6-8f12-d1b6a5c892f3-webhook-certs\") pod \"openstack-operator-controller-manager-65544cf747-sgxjd\" (UID: \"fcdb20a3-7229-48e6-8f12-d1b6a5c892f3\") " pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:25:00 crc kubenswrapper[4870]: I0130 08:25:00.245369 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:25:05 crc kubenswrapper[4870]: E0130 08:25:05.228907 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/openstack-k8s-operators/watcher-operator:3bb7e7b472cb523c5e84dcb1d8dd4ef08e04be5e" Jan 30 08:25:05 crc kubenswrapper[4870]: E0130 08:25:05.229974 4870 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/openstack-k8s-operators/watcher-operator:3bb7e7b472cb523c5e84dcb1d8dd4ef08e04be5e" Jan 30 08:25:05 crc kubenswrapper[4870]: E0130 08:25:05.230656 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.23:5001/openstack-k8s-operators/watcher-operator:3bb7e7b472cb523c5e84dcb1d8dd4ef08e04be5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mc59p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7b7dd57594-2p68v_openstack-operators(d6956410-92c0-40bf-b1c1-a3353ccf1bbc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:25:05 crc kubenswrapper[4870]: E0130 08:25:05.231892 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-7b7dd57594-2p68v" podUID="d6956410-92c0-40bf-b1c1-a3353ccf1bbc" Jan 30 08:25:06 crc kubenswrapper[4870]: E0130 08:25:06.026489 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 30 08:25:06 crc kubenswrapper[4870]: E0130 08:25:06.026822 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8t5t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-sds6v_openstack-operators(b706cc39-6af6-4a91-b2a2-6160148dadae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:25:06 crc kubenswrapper[4870]: E0130 08:25:06.029094 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sds6v" podUID="b706cc39-6af6-4a91-b2a2-6160148dadae" Jan 30 08:25:06 crc kubenswrapper[4870]: I0130 08:25:06.592820 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l8t5h"] Jan 30 08:25:06 crc kubenswrapper[4870]: W0130 08:25:06.593653 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2ebdb93_c8ce_45c1_b10f_037853cc99d9.slice/crio-424c2889668056eeecef60e4a12c02f8812311e06dd82e3b2b01529aeee939aa WatchSource:0}: Error finding container 424c2889668056eeecef60e4a12c02f8812311e06dd82e3b2b01529aeee939aa: Status 404 returned error can't find the container with id 424c2889668056eeecef60e4a12c02f8812311e06dd82e3b2b01529aeee939aa Jan 30 08:25:06 crc kubenswrapper[4870]: I0130 08:25:06.654800 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd"] Jan 30 08:25:06 crc kubenswrapper[4870]: W0130 08:25:06.664832 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcdb20a3_7229_48e6_8f12_d1b6a5c892f3.slice/crio-b7388ad560dc0f84a65d0db103d1f861e42a83bf603f3f58c3147cd10324a7ed WatchSource:0}: Error finding container b7388ad560dc0f84a65d0db103d1f861e42a83bf603f3f58c3147cd10324a7ed: Status 404 returned error can't find the container with id b7388ad560dc0f84a65d0db103d1f861e42a83bf603f3f58c3147cd10324a7ed Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.082384 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t4hbm" event={"ID":"ec9257db-1c02-4160-9c89-7df62f2ce602","Type":"ContainerStarted","Data":"598d8e0fbfa60e302ba1d679c6fd723141d602598c2deee6fd9f2aa0231d4429"} Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.083209 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t4hbm" Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 
08:25:07.084399 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cpn6f" event={"ID":"604ff246-0f47-4c2c-8940-d76f10dce14e","Type":"ContainerStarted","Data":"b8f6e228a17aac34e5cc8941d1a4b4ab4e8038bf3d42af6eda2e1be33010cbf8"} Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.084782 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cpn6f" Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.085568 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-5vfrj" event={"ID":"5680ceb3-f5ec-4d9e-a313-13564402bff2","Type":"ContainerStarted","Data":"ac2e2091012196765bb594492445d2d5c11ca198f17d2cd3f876f5046d2331b0"} Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.086041 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-5vfrj" Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.088206 4870 generic.go:334] "Generic (PLEG): container finished" podID="f2ebdb93-c8ce-45c1-b10f-037853cc99d9" containerID="3c926d70117bcd5ef2815b7cdbe130647e38bb075a7c19633b0d0c4561316c4f" exitCode=0 Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.088367 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8t5h" event={"ID":"f2ebdb93-c8ce-45c1-b10f-037853cc99d9","Type":"ContainerDied","Data":"3c926d70117bcd5ef2815b7cdbe130647e38bb075a7c19633b0d0c4561316c4f"} Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.088965 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8t5h" event={"ID":"f2ebdb93-c8ce-45c1-b10f-037853cc99d9","Type":"ContainerStarted","Data":"424c2889668056eeecef60e4a12c02f8812311e06dd82e3b2b01529aeee939aa"} Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.094026 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mx5xp" event={"ID":"274d3a56-3caf-4dd2-b122-e3b45a3eec6e","Type":"ContainerStarted","Data":"a20b2399deab0ca006f954d363d3c523d2c4eb3c8b08c0093d9f574dfb14ed99"} Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.094295 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mx5xp" Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.098127 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" event={"ID":"be7a26e3-9284-4316-bce7-7bc15c9178bd","Type":"ContainerStarted","Data":"12ce709e6f07eb4baf6bc7c9d5f6268879da8c79f958c70510442e10096acd70"} Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.099029 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.104155 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hbmf7" event={"ID":"925313c0-6800-4a27-814b-887b46cf49ad","Type":"ContainerStarted","Data":"3ec03c81bc3012aae78444827de3e3fb7bfa7f82bb77e8175a041a07e79f6eea"} Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.104397 4870 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hbmf7" Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.105667 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-t8ncr" event={"ID":"378c24d4-b8c1-4cd2-a85c-8449aa00ad3e","Type":"ContainerStarted","Data":"9436a32bd8ad9b0e7f50cc8c9adeca28d70eba9075b08895e63f97d61c9c40ea"} Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.105808 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-t8ncr" Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.107545 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" event={"ID":"fcdb20a3-7229-48e6-8f12-d1b6a5c892f3","Type":"ContainerStarted","Data":"af1ea79b9c2d897c86152d93233c485038bf92e180307cc4bc3dd3ecf4428494"} Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.107576 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" event={"ID":"fcdb20a3-7229-48e6-8f12-d1b6a5c892f3","Type":"ContainerStarted","Data":"b7388ad560dc0f84a65d0db103d1f861e42a83bf603f3f58c3147cd10324a7ed"} Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.107697 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.108929 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bmzrd" event={"ID":"0319ce7f-95ab-4abf-9101-bf436cc74bf4","Type":"ContainerStarted","Data":"462af9c4d0b02aafa48258f02270d794a1bbe41eb7b67e1441f92558cd4cc074"} Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.109284 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bmzrd" Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.111399 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-spzcf" event={"ID":"46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05","Type":"ContainerStarted","Data":"01d7492f410f9e9fd7af41951b5d1897a037791a5dc63ee5379feb50748737a7"} Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.111662 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-spzcf" Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.124577 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t4hbm" podStartSLOduration=7.176622634 podStartE2EDuration="40.124550699s" podCreationTimestamp="2026-01-30 08:24:27 +0000 UTC" firstStartedPulling="2026-01-30 08:24:29.299182313 +0000 UTC m=+907.994729422" lastFinishedPulling="2026-01-30 08:25:02.247110338 +0000 UTC m=+940.942657487" observedRunningTime="2026-01-30 08:25:07.114256576 +0000 UTC m=+945.809803685" watchObservedRunningTime="2026-01-30 08:25:07.124550699 +0000 UTC m=+945.820097808" Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.146013 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-t8ncr" podStartSLOduration=9.1328392 podStartE2EDuration="40.145998443s" podCreationTimestamp="2026-01-30 08:24:27 +0000 UTC" firstStartedPulling="2026-01-30 08:24:29.293583937 +0000 UTC m=+907.989131046" lastFinishedPulling="2026-01-30 08:25:00.30674317 +0000 UTC m=+939.002290289" observedRunningTime="2026-01-30 08:25:07.139250141 +0000 UTC m=+945.834797250" watchObservedRunningTime="2026-01-30 08:25:07.145998443 +0000 UTC m=+945.841545552" Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.174812 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-5vfrj" podStartSLOduration=4.359800194 podStartE2EDuration="41.174793427s" podCreationTimestamp="2026-01-30 08:24:26 +0000 UTC" firstStartedPulling="2026-01-30 08:24:29.213026557 +0000 UTC m=+907.908573666" lastFinishedPulling="2026-01-30 08:25:06.0280198 +0000 UTC m=+944.723566899" observedRunningTime="2026-01-30 08:25:07.173613381 +0000 UTC m=+945.869160490" watchObservedRunningTime="2026-01-30 08:25:07.174793427 +0000 UTC m=+945.870340536" Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.217820 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hbmf7" podStartSLOduration=4.3635791919999996 podStartE2EDuration="41.217802298s" podCreationTimestamp="2026-01-30 08:24:26 +0000 UTC" firstStartedPulling="2026-01-30 08:24:29.191687817 +0000 UTC m=+907.887234926" lastFinishedPulling="2026-01-30 08:25:06.045910913 +0000 UTC m=+944.741458032" observedRunningTime="2026-01-30 08:25:07.21660062 +0000 UTC m=+945.912147729" watchObservedRunningTime="2026-01-30 08:25:07.217802298 +0000 UTC m=+945.913349407" Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.296066 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" podStartSLOduration=19.258309283 podStartE2EDuration="40.296040796s" podCreationTimestamp="2026-01-30 08:24:27 +0000 UTC" firstStartedPulling="2026-01-30 08:24:44.166455915 +0000 UTC m=+922.862003024" lastFinishedPulling="2026-01-30 08:25:05.204187398 +0000 UTC m=+943.899734537" observedRunningTime="2026-01-30 08:25:07.295267102 +0000 UTC m=+945.990814211" watchObservedRunningTime="2026-01-30 08:25:07.296040796 +0000 UTC m=+945.991587905" Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.391545 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" podStartSLOduration=40.391517166 podStartE2EDuration="40.391517166s" podCreationTimestamp="2026-01-30 08:24:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:25:07.378948241 +0000 UTC m=+946.074495350" watchObservedRunningTime="2026-01-30 08:25:07.391517166 +0000 UTC m=+946.087064275" Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.476138 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bmzrd" podStartSLOduration=3.770074524 podStartE2EDuration="40.476105894s" podCreationTimestamp="2026-01-30 08:24:27 +0000 UTC" firstStartedPulling="2026-01-30 08:24:29.293292228 +0000 UTC m=+907.988839337" 
lastFinishedPulling="2026-01-30 08:25:05.999323558 +0000 UTC m=+944.694870707" observedRunningTime="2026-01-30 08:25:07.472764349 +0000 UTC m=+946.168311458" watchObservedRunningTime="2026-01-30 08:25:07.476105894 +0000 UTC m=+946.171652993" Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.511704 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mx5xp" podStartSLOduration=3.79670409 podStartE2EDuration="40.511686751s" podCreationTimestamp="2026-01-30 08:24:27 +0000 UTC" firstStartedPulling="2026-01-30 08:24:29.292008068 +0000 UTC m=+907.987555167" lastFinishedPulling="2026-01-30 08:25:06.006990689 +0000 UTC m=+944.702537828" observedRunningTime="2026-01-30 08:25:07.509950227 +0000 UTC m=+946.205497336" watchObservedRunningTime="2026-01-30 08:25:07.511686751 +0000 UTC m=+946.207233860" Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.542359 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cpn6f" podStartSLOduration=5.6352795449999995 podStartE2EDuration="41.542343284s" podCreationTimestamp="2026-01-30 08:24:26 +0000 UTC" firstStartedPulling="2026-01-30 08:24:29.296514499 +0000 UTC m=+907.992061609" lastFinishedPulling="2026-01-30 08:25:05.203578229 +0000 UTC m=+943.899125348" observedRunningTime="2026-01-30 08:25:07.540564089 +0000 UTC m=+946.236111198" watchObservedRunningTime="2026-01-30 08:25:07.542343284 +0000 UTC m=+946.237890393" Jan 30 08:25:07 crc kubenswrapper[4870]: I0130 08:25:07.575055 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-spzcf" podStartSLOduration=20.067128576 podStartE2EDuration="41.575031151s" podCreationTimestamp="2026-01-30 08:24:26 +0000 UTC" firstStartedPulling="2026-01-30 08:24:43.590995466 +0000 UTC m=+922.286542575" lastFinishedPulling="2026-01-30 08:25:05.098898021 +0000 UTC m=+943.794445150" observedRunningTime="2026-01-30 08:25:07.56927759 +0000 UTC m=+946.264824699" watchObservedRunningTime="2026-01-30 08:25:07.575031151 +0000 UTC m=+946.270578260" Jan 30 08:25:08 crc kubenswrapper[4870]: I0130 08:25:08.126397 4870 generic.go:334] "Generic (PLEG): container finished" podID="f2ebdb93-c8ce-45c1-b10f-037853cc99d9" containerID="ae2c5a28d1054dd4f49367fe7abafb310d87c558c84f94dd0bfae47916a13ec9" exitCode=0 Jan 30 08:25:08 crc kubenswrapper[4870]: I0130 08:25:08.127353 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8t5h" event={"ID":"f2ebdb93-c8ce-45c1-b10f-037853cc99d9","Type":"ContainerDied","Data":"ae2c5a28d1054dd4f49367fe7abafb310d87c558c84f94dd0bfae47916a13ec9"} Jan 30 08:25:09 crc kubenswrapper[4870]: I0130 08:25:09.139043 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8t5h" event={"ID":"f2ebdb93-c8ce-45c1-b10f-037853cc99d9","Type":"ContainerStarted","Data":"1b7258f0cb3b8d94cfbc81a8d15ba08d50e7cf8aabca07490fec1c5e1133bba8"} Jan 30 08:25:09 crc kubenswrapper[4870]: I0130 08:25:09.497953 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l8t5h" Jan 30 08:25:09 crc kubenswrapper[4870]: I0130 08:25:09.498011 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l8t5h" Jan 30 08:25:09 crc kubenswrapper[4870]: I0130 08:25:09.821608 
Jan 30 08:25:09 crc kubenswrapper[4870]: I0130 08:25:09.828944 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t4nh7"]
Jan 30 08:25:09 crc kubenswrapper[4870]: I0130 08:25:09.830766 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t4nh7"
Jan 30 08:25:09 crc kubenswrapper[4870]: I0130 08:25:09.845557 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t4nh7"]
Jan 30 08:25:09 crc kubenswrapper[4870]: I0130 08:25:09.887329 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7z55\" (UniqueName: \"kubernetes.io/projected/d7766425-f469-4513-b62b-e44e3d3f81bc-kube-api-access-x7z55\") pod \"certified-operators-t4nh7\" (UID: \"d7766425-f469-4513-b62b-e44e3d3f81bc\") " pod="openshift-marketplace/certified-operators-t4nh7"
Jan 30 08:25:09 crc kubenswrapper[4870]: I0130 08:25:09.887447 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7766425-f469-4513-b62b-e44e3d3f81bc-catalog-content\") pod \"certified-operators-t4nh7\" (UID: \"d7766425-f469-4513-b62b-e44e3d3f81bc\") " pod="openshift-marketplace/certified-operators-t4nh7"
Jan 30 08:25:09 crc kubenswrapper[4870]: I0130 08:25:09.887491 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7766425-f469-4513-b62b-e44e3d3f81bc-utilities\") pod \"certified-operators-t4nh7\" (UID: \"d7766425-f469-4513-b62b-e44e3d3f81bc\") " pod="openshift-marketplace/certified-operators-t4nh7"
Jan 30 08:25:09 crc kubenswrapper[4870]: I0130 08:25:09.988760 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7766425-f469-4513-b62b-e44e3d3f81bc-catalog-content\") pod \"certified-operators-t4nh7\" (UID: \"d7766425-f469-4513-b62b-e44e3d3f81bc\") " pod="openshift-marketplace/certified-operators-t4nh7"
Jan 30 08:25:09 crc kubenswrapper[4870]: I0130 08:25:09.988856 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7766425-f469-4513-b62b-e44e3d3f81bc-utilities\") pod \"certified-operators-t4nh7\" (UID: \"d7766425-f469-4513-b62b-e44e3d3f81bc\") " pod="openshift-marketplace/certified-operators-t4nh7"
Jan 30 08:25:09 crc kubenswrapper[4870]: I0130 08:25:09.988996 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7z55\" (UniqueName: \"kubernetes.io/projected/d7766425-f469-4513-b62b-e44e3d3f81bc-kube-api-access-x7z55\") pod \"certified-operators-t4nh7\" (UID: \"d7766425-f469-4513-b62b-e44e3d3f81bc\") " pod="openshift-marketplace/certified-operators-t4nh7"
Jan 30
08:25:09 crc kubenswrapper[4870]: I0130 08:25:09.989445 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7766425-f469-4513-b62b-e44e3d3f81bc-catalog-content\") pod \"certified-operators-t4nh7\" (UID: \"d7766425-f469-4513-b62b-e44e3d3f81bc\") " pod="openshift-marketplace/certified-operators-t4nh7" Jan 30 08:25:09 crc kubenswrapper[4870]: I0130 08:25:09.989976 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7766425-f469-4513-b62b-e44e3d3f81bc-utilities\") pod \"certified-operators-t4nh7\" (UID: \"d7766425-f469-4513-b62b-e44e3d3f81bc\") " pod="openshift-marketplace/certified-operators-t4nh7" Jan 30 08:25:10 crc kubenswrapper[4870]: I0130 08:25:10.020820 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7z55\" (UniqueName: \"kubernetes.io/projected/d7766425-f469-4513-b62b-e44e3d3f81bc-kube-api-access-x7z55\") pod \"certified-operators-t4nh7\" (UID: \"d7766425-f469-4513-b62b-e44e3d3f81bc\") " pod="openshift-marketplace/certified-operators-t4nh7" Jan 30 08:25:10 crc kubenswrapper[4870]: I0130 08:25:10.152176 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t4nh7" Jan 30 08:25:10 crc kubenswrapper[4870]: I0130 08:25:10.474083 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t4nh7"] Jan 30 08:25:10 crc kubenswrapper[4870]: W0130 08:25:10.477744 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7766425_f469_4513_b62b_e44e3d3f81bc.slice/crio-bf15e36614073a765c12f2a92a73edc04b67fdb7bd97803fa465e4ec3897310b WatchSource:0}: Error finding container bf15e36614073a765c12f2a92a73edc04b67fdb7bd97803fa465e4ec3897310b: Status 404 returned error can't find the container with id bf15e36614073a765c12f2a92a73edc04b67fdb7bd97803fa465e4ec3897310b Jan 30 08:25:10 crc kubenswrapper[4870]: I0130 08:25:10.548655 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-l8t5h" podUID="f2ebdb93-c8ce-45c1-b10f-037853cc99d9" containerName="registry-server" probeResult="failure" output=< Jan 30 08:25:10 crc kubenswrapper[4870]: timeout: failed to connect service ":50051" within 1s Jan 30 08:25:10 crc kubenswrapper[4870]: > Jan 30 08:25:11 crc kubenswrapper[4870]: I0130 08:25:11.155540 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4nh7" event={"ID":"d7766425-f469-4513-b62b-e44e3d3f81bc","Type":"ContainerDied","Data":"ced22f80742d52f01c7a2a77934c1cc85ecaeb0560f0d90e44530ff08ba4d9a1"} Jan 30 08:25:11 crc kubenswrapper[4870]: I0130 08:25:11.155218 4870 generic.go:334] "Generic (PLEG): container finished" podID="d7766425-f469-4513-b62b-e44e3d3f81bc" containerID="ced22f80742d52f01c7a2a77934c1cc85ecaeb0560f0d90e44530ff08ba4d9a1" exitCode=0 Jan 30 08:25:11 crc kubenswrapper[4870]: I0130 08:25:11.156255 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4nh7" event={"ID":"d7766425-f469-4513-b62b-e44e3d3f81bc","Type":"ContainerStarted","Data":"bf15e36614073a765c12f2a92a73edc04b67fdb7bd97803fa465e4ec3897310b"} Jan 30 08:25:12 crc kubenswrapper[4870]: I0130 08:25:12.173868 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-t4nh7" event={"ID":"d7766425-f469-4513-b62b-e44e3d3f81bc","Type":"ContainerStarted","Data":"59dd3f065b079b1913a4f498086fdcee013f59a43677085570cb3e003e1cb9ad"} Jan 30 08:25:13 crc kubenswrapper[4870]: I0130 08:25:13.021526 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-spzcf" Jan 30 08:25:13 crc kubenswrapper[4870]: I0130 08:25:13.184209 4870 generic.go:334] "Generic (PLEG): container finished" podID="d7766425-f469-4513-b62b-e44e3d3f81bc" containerID="59dd3f065b079b1913a4f498086fdcee013f59a43677085570cb3e003e1cb9ad" exitCode=0 Jan 30 08:25:13 crc kubenswrapper[4870]: I0130 08:25:13.184272 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4nh7" event={"ID":"d7766425-f469-4513-b62b-e44e3d3f81bc","Type":"ContainerDied","Data":"59dd3f065b079b1913a4f498086fdcee013f59a43677085570cb3e003e1cb9ad"} Jan 30 08:25:13 crc kubenswrapper[4870]: I0130 08:25:13.571844 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8" Jan 30 08:25:14 crc kubenswrapper[4870]: I0130 08:25:14.208093 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4nh7" event={"ID":"d7766425-f469-4513-b62b-e44e3d3f81bc","Type":"ContainerStarted","Data":"717be5686005143024a4b1fb0b55c3b441fc39613b640266654be36845057b0c"} Jan 30 08:25:14 crc kubenswrapper[4870]: I0130 08:25:14.238367 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t4nh7" podStartSLOduration=2.632265974 podStartE2EDuration="5.238349857s" podCreationTimestamp="2026-01-30 08:25:09 +0000 UTC" firstStartedPulling="2026-01-30 08:25:11.157444577 +0000 UTC m=+949.852991716" lastFinishedPulling="2026-01-30 08:25:13.76352849 +0000 UTC m=+952.459075599" observedRunningTime="2026-01-30 08:25:14.228458987 +0000 UTC m=+952.924006106" watchObservedRunningTime="2026-01-30 08:25:14.238349857 +0000 UTC m=+952.933896966" Jan 30 08:25:17 crc kubenswrapper[4870]: I0130 08:25:17.314537 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hbmf7" Jan 30 08:25:17 crc kubenswrapper[4870]: I0130 08:25:17.440165 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-5vfrj" Jan 30 08:25:17 crc kubenswrapper[4870]: I0130 08:25:17.723066 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cpn6f" Jan 30 08:25:18 crc kubenswrapper[4870]: I0130 08:25:18.012713 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-bmzrd" Jan 30 08:25:18 crc kubenswrapper[4870]: I0130 08:25:18.054667 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t4hbm" Jan 30 08:25:18 crc kubenswrapper[4870]: I0130 08:25:18.269486 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mx5xp" Jan 30 08:25:18 crc kubenswrapper[4870]: I0130 08:25:18.328039 4870 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-t8ncr" Jan 30 08:25:19 crc kubenswrapper[4870]: E0130 08:25:19.077226 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sds6v" podUID="b706cc39-6af6-4a91-b2a2-6160148dadae" Jan 30 08:25:19 crc kubenswrapper[4870]: E0130 08:25:19.077290 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.23:5001/openstack-k8s-operators/watcher-operator:3bb7e7b472cb523c5e84dcb1d8dd4ef08e04be5e\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7b7dd57594-2p68v" podUID="d6956410-92c0-40bf-b1c1-a3353ccf1bbc" Jan 30 08:25:19 crc kubenswrapper[4870]: I0130 08:25:19.581517 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l8t5h" Jan 30 08:25:19 crc kubenswrapper[4870]: I0130 08:25:19.650663 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l8t5h" Jan 30 08:25:19 crc kubenswrapper[4870]: I0130 08:25:19.820418 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l8t5h"] Jan 30 08:25:20 crc kubenswrapper[4870]: I0130 08:25:20.152675 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t4nh7" Jan 30 08:25:20 crc kubenswrapper[4870]: I0130 08:25:20.152775 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t4nh7" Jan 30 08:25:20 crc kubenswrapper[4870]: I0130 08:25:20.209135 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t4nh7" Jan 30 08:25:20 crc kubenswrapper[4870]: I0130 08:25:20.255811 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-65544cf747-sgxjd" Jan 30 08:25:20 crc kubenswrapper[4870]: I0130 08:25:20.335237 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t4nh7" Jan 30 08:25:21 crc kubenswrapper[4870]: I0130 08:25:21.282496 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l8t5h" podUID="f2ebdb93-c8ce-45c1-b10f-037853cc99d9" containerName="registry-server" containerID="cri-o://1b7258f0cb3b8d94cfbc81a8d15ba08d50e7cf8aabca07490fec1c5e1133bba8" gracePeriod=2 Jan 30 08:25:21 crc kubenswrapper[4870]: I0130 08:25:21.800159 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l8t5h" Jan 30 08:25:21 crc kubenswrapper[4870]: I0130 08:25:21.935281 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q99jp\" (UniqueName: \"kubernetes.io/projected/f2ebdb93-c8ce-45c1-b10f-037853cc99d9-kube-api-access-q99jp\") pod \"f2ebdb93-c8ce-45c1-b10f-037853cc99d9\" (UID: \"f2ebdb93-c8ce-45c1-b10f-037853cc99d9\") " Jan 30 08:25:21 crc kubenswrapper[4870]: I0130 08:25:21.935372 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2ebdb93-c8ce-45c1-b10f-037853cc99d9-catalog-content\") pod \"f2ebdb93-c8ce-45c1-b10f-037853cc99d9\" (UID: \"f2ebdb93-c8ce-45c1-b10f-037853cc99d9\") " Jan 30 08:25:21 crc kubenswrapper[4870]: I0130 08:25:21.935454 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2ebdb93-c8ce-45c1-b10f-037853cc99d9-utilities\") pod \"f2ebdb93-c8ce-45c1-b10f-037853cc99d9\" (UID: \"f2ebdb93-c8ce-45c1-b10f-037853cc99d9\") " Jan 30 08:25:21 crc kubenswrapper[4870]: I0130 08:25:21.936454 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2ebdb93-c8ce-45c1-b10f-037853cc99d9-utilities" (OuterVolumeSpecName: "utilities") pod "f2ebdb93-c8ce-45c1-b10f-037853cc99d9" (UID: "f2ebdb93-c8ce-45c1-b10f-037853cc99d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:25:21 crc kubenswrapper[4870]: I0130 08:25:21.940765 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2ebdb93-c8ce-45c1-b10f-037853cc99d9-kube-api-access-q99jp" (OuterVolumeSpecName: "kube-api-access-q99jp") pod "f2ebdb93-c8ce-45c1-b10f-037853cc99d9" (UID: "f2ebdb93-c8ce-45c1-b10f-037853cc99d9"). InnerVolumeSpecName "kube-api-access-q99jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:25:21 crc kubenswrapper[4870]: I0130 08:25:21.966906 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2ebdb93-c8ce-45c1-b10f-037853cc99d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2ebdb93-c8ce-45c1-b10f-037853cc99d9" (UID: "f2ebdb93-c8ce-45c1-b10f-037853cc99d9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:25:22 crc kubenswrapper[4870]: I0130 08:25:22.038021 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2ebdb93-c8ce-45c1-b10f-037853cc99d9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:25:22 crc kubenswrapper[4870]: I0130 08:25:22.038069 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2ebdb93-c8ce-45c1-b10f-037853cc99d9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:25:22 crc kubenswrapper[4870]: I0130 08:25:22.038081 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q99jp\" (UniqueName: \"kubernetes.io/projected/f2ebdb93-c8ce-45c1-b10f-037853cc99d9-kube-api-access-q99jp\") on node \"crc\" DevicePath \"\"" Jan 30 08:25:22 crc kubenswrapper[4870]: I0130 08:25:22.295362 4870 generic.go:334] "Generic (PLEG): container finished" podID="f2ebdb93-c8ce-45c1-b10f-037853cc99d9" containerID="1b7258f0cb3b8d94cfbc81a8d15ba08d50e7cf8aabca07490fec1c5e1133bba8" exitCode=0 Jan 30 08:25:22 crc kubenswrapper[4870]: I0130 08:25:22.295458 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8t5h" event={"ID":"f2ebdb93-c8ce-45c1-b10f-037853cc99d9","Type":"ContainerDied","Data":"1b7258f0cb3b8d94cfbc81a8d15ba08d50e7cf8aabca07490fec1c5e1133bba8"} Jan 30 08:25:22 crc kubenswrapper[4870]: I0130 08:25:22.295513 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8t5h" event={"ID":"f2ebdb93-c8ce-45c1-b10f-037853cc99d9","Type":"ContainerDied","Data":"424c2889668056eeecef60e4a12c02f8812311e06dd82e3b2b01529aeee939aa"} Jan 30 08:25:22 crc kubenswrapper[4870]: I0130 08:25:22.295512 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l8t5h" Jan 30 08:25:22 crc kubenswrapper[4870]: I0130 08:25:22.295550 4870 scope.go:117] "RemoveContainer" containerID="1b7258f0cb3b8d94cfbc81a8d15ba08d50e7cf8aabca07490fec1c5e1133bba8" Jan 30 08:25:22 crc kubenswrapper[4870]: I0130 08:25:22.325842 4870 scope.go:117] "RemoveContainer" containerID="ae2c5a28d1054dd4f49367fe7abafb310d87c558c84f94dd0bfae47916a13ec9" Jan 30 08:25:22 crc kubenswrapper[4870]: I0130 08:25:22.330952 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l8t5h"] Jan 30 08:25:22 crc kubenswrapper[4870]: I0130 08:25:22.339645 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l8t5h"] Jan 30 08:25:22 crc kubenswrapper[4870]: I0130 08:25:22.349252 4870 scope.go:117] "RemoveContainer" containerID="3c926d70117bcd5ef2815b7cdbe130647e38bb075a7c19633b0d0c4561316c4f" Jan 30 08:25:22 crc kubenswrapper[4870]: I0130 08:25:22.394750 4870 scope.go:117] "RemoveContainer" containerID="1b7258f0cb3b8d94cfbc81a8d15ba08d50e7cf8aabca07490fec1c5e1133bba8" Jan 30 08:25:22 crc kubenswrapper[4870]: E0130 08:25:22.395224 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b7258f0cb3b8d94cfbc81a8d15ba08d50e7cf8aabca07490fec1c5e1133bba8\": container with ID starting with 1b7258f0cb3b8d94cfbc81a8d15ba08d50e7cf8aabca07490fec1c5e1133bba8 not found: ID does not exist" containerID="1b7258f0cb3b8d94cfbc81a8d15ba08d50e7cf8aabca07490fec1c5e1133bba8" Jan 30 08:25:22 crc kubenswrapper[4870]: I0130 08:25:22.395264 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7258f0cb3b8d94cfbc81a8d15ba08d50e7cf8aabca07490fec1c5e1133bba8"} err="failed to get container status \"1b7258f0cb3b8d94cfbc81a8d15ba08d50e7cf8aabca07490fec1c5e1133bba8\": rpc error: code = NotFound desc = could not find container \"1b7258f0cb3b8d94cfbc81a8d15ba08d50e7cf8aabca07490fec1c5e1133bba8\": container with ID starting with 1b7258f0cb3b8d94cfbc81a8d15ba08d50e7cf8aabca07490fec1c5e1133bba8 not found: ID does not exist" Jan 30 08:25:22 crc kubenswrapper[4870]: I0130 08:25:22.395296 4870 scope.go:117] "RemoveContainer" containerID="ae2c5a28d1054dd4f49367fe7abafb310d87c558c84f94dd0bfae47916a13ec9" Jan 30 08:25:22 crc kubenswrapper[4870]: E0130 08:25:22.395712 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae2c5a28d1054dd4f49367fe7abafb310d87c558c84f94dd0bfae47916a13ec9\": container with ID starting with ae2c5a28d1054dd4f49367fe7abafb310d87c558c84f94dd0bfae47916a13ec9 not found: ID does not exist" containerID="ae2c5a28d1054dd4f49367fe7abafb310d87c558c84f94dd0bfae47916a13ec9" Jan 30 08:25:22 crc kubenswrapper[4870]: I0130 08:25:22.395742 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae2c5a28d1054dd4f49367fe7abafb310d87c558c84f94dd0bfae47916a13ec9"} err="failed to get container status \"ae2c5a28d1054dd4f49367fe7abafb310d87c558c84f94dd0bfae47916a13ec9\": rpc error: code = NotFound desc = could not find container \"ae2c5a28d1054dd4f49367fe7abafb310d87c558c84f94dd0bfae47916a13ec9\": container with ID starting with ae2c5a28d1054dd4f49367fe7abafb310d87c558c84f94dd0bfae47916a13ec9 not found: ID does not exist" Jan 30 08:25:22 crc kubenswrapper[4870]: I0130 08:25:22.395761 4870 scope.go:117] "RemoveContainer" 
containerID="3c926d70117bcd5ef2815b7cdbe130647e38bb075a7c19633b0d0c4561316c4f" Jan 30 08:25:22 crc kubenswrapper[4870]: E0130 08:25:22.396118 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c926d70117bcd5ef2815b7cdbe130647e38bb075a7c19633b0d0c4561316c4f\": container with ID starting with 3c926d70117bcd5ef2815b7cdbe130647e38bb075a7c19633b0d0c4561316c4f not found: ID does not exist" containerID="3c926d70117bcd5ef2815b7cdbe130647e38bb075a7c19633b0d0c4561316c4f" Jan 30 08:25:22 crc kubenswrapper[4870]: I0130 08:25:22.396148 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c926d70117bcd5ef2815b7cdbe130647e38bb075a7c19633b0d0c4561316c4f"} err="failed to get container status \"3c926d70117bcd5ef2815b7cdbe130647e38bb075a7c19633b0d0c4561316c4f\": rpc error: code = NotFound desc = could not find container \"3c926d70117bcd5ef2815b7cdbe130647e38bb075a7c19633b0d0c4561316c4f\": container with ID starting with 3c926d70117bcd5ef2815b7cdbe130647e38bb075a7c19633b0d0c4561316c4f not found: ID does not exist" Jan 30 08:25:22 crc kubenswrapper[4870]: I0130 08:25:22.433204 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t4nh7"] Jan 30 08:25:22 crc kubenswrapper[4870]: I0130 08:25:22.433606 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t4nh7" podUID="d7766425-f469-4513-b62b-e44e3d3f81bc" containerName="registry-server" containerID="cri-o://717be5686005143024a4b1fb0b55c3b441fc39613b640266654be36845057b0c" gracePeriod=2 Jan 30 08:25:24 crc kubenswrapper[4870]: I0130 08:25:24.096831 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2ebdb93-c8ce-45c1-b10f-037853cc99d9" path="/var/lib/kubelet/pods/f2ebdb93-c8ce-45c1-b10f-037853cc99d9/volumes" Jan 30 08:25:24 crc kubenswrapper[4870]: I0130 08:25:24.313215 4870 generic.go:334] "Generic (PLEG): container finished" podID="d7766425-f469-4513-b62b-e44e3d3f81bc" containerID="717be5686005143024a4b1fb0b55c3b441fc39613b640266654be36845057b0c" exitCode=0 Jan 30 08:25:24 crc kubenswrapper[4870]: I0130 08:25:24.313278 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4nh7" event={"ID":"d7766425-f469-4513-b62b-e44e3d3f81bc","Type":"ContainerDied","Data":"717be5686005143024a4b1fb0b55c3b441fc39613b640266654be36845057b0c"} Jan 30 08:25:25 crc kubenswrapper[4870]: I0130 08:25:25.250129 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:25:25 crc kubenswrapper[4870]: I0130 08:25:25.251048 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:25:25 crc kubenswrapper[4870]: I0130 08:25:25.514104 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t4nh7" Jan 30 08:25:25 crc kubenswrapper[4870]: I0130 08:25:25.609507 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7766425-f469-4513-b62b-e44e3d3f81bc-catalog-content\") pod \"d7766425-f469-4513-b62b-e44e3d3f81bc\" (UID: \"d7766425-f469-4513-b62b-e44e3d3f81bc\") " Jan 30 08:25:25 crc kubenswrapper[4870]: I0130 08:25:25.609652 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7766425-f469-4513-b62b-e44e3d3f81bc-utilities\") pod \"d7766425-f469-4513-b62b-e44e3d3f81bc\" (UID: \"d7766425-f469-4513-b62b-e44e3d3f81bc\") " Jan 30 08:25:25 crc kubenswrapper[4870]: I0130 08:25:25.609722 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7z55\" (UniqueName: \"kubernetes.io/projected/d7766425-f469-4513-b62b-e44e3d3f81bc-kube-api-access-x7z55\") pod \"d7766425-f469-4513-b62b-e44e3d3f81bc\" (UID: \"d7766425-f469-4513-b62b-e44e3d3f81bc\") " Jan 30 08:25:25 crc kubenswrapper[4870]: I0130 08:25:25.612047 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7766425-f469-4513-b62b-e44e3d3f81bc-utilities" (OuterVolumeSpecName: "utilities") pod "d7766425-f469-4513-b62b-e44e3d3f81bc" (UID: "d7766425-f469-4513-b62b-e44e3d3f81bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:25:25 crc kubenswrapper[4870]: I0130 08:25:25.614892 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7766425-f469-4513-b62b-e44e3d3f81bc-kube-api-access-x7z55" (OuterVolumeSpecName: "kube-api-access-x7z55") pod "d7766425-f469-4513-b62b-e44e3d3f81bc" (UID: "d7766425-f469-4513-b62b-e44e3d3f81bc"). InnerVolumeSpecName "kube-api-access-x7z55". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:25:25 crc kubenswrapper[4870]: I0130 08:25:25.678618 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7766425-f469-4513-b62b-e44e3d3f81bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7766425-f469-4513-b62b-e44e3d3f81bc" (UID: "d7766425-f469-4513-b62b-e44e3d3f81bc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:25:25 crc kubenswrapper[4870]: I0130 08:25:25.711755 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7766425-f469-4513-b62b-e44e3d3f81bc-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:25:25 crc kubenswrapper[4870]: I0130 08:25:25.711792 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7z55\" (UniqueName: \"kubernetes.io/projected/d7766425-f469-4513-b62b-e44e3d3f81bc-kube-api-access-x7z55\") on node \"crc\" DevicePath \"\"" Jan 30 08:25:25 crc kubenswrapper[4870]: I0130 08:25:25.711807 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7766425-f469-4513-b62b-e44e3d3f81bc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:25:26 crc kubenswrapper[4870]: E0130 08:25:26.175420 4870 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2ebdb93_c8ce_45c1_b10f_037853cc99d9.slice/crio-424c2889668056eeecef60e4a12c02f8812311e06dd82e3b2b01529aeee939aa\": RecentStats: unable to find data in memory cache]" Jan 30 08:25:26 crc kubenswrapper[4870]: I0130 08:25:26.337193 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4nh7" event={"ID":"d7766425-f469-4513-b62b-e44e3d3f81bc","Type":"ContainerDied","Data":"bf15e36614073a765c12f2a92a73edc04b67fdb7bd97803fa465e4ec3897310b"} Jan 30 08:25:26 crc kubenswrapper[4870]: I0130 08:25:26.337843 4870 scope.go:117] "RemoveContainer" containerID="717be5686005143024a4b1fb0b55c3b441fc39613b640266654be36845057b0c" Jan 30 08:25:26 crc kubenswrapper[4870]: I0130 08:25:26.337340 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t4nh7" Jan 30 08:25:26 crc kubenswrapper[4870]: I0130 08:25:26.404297 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t4nh7"] Jan 30 08:25:26 crc kubenswrapper[4870]: I0130 08:25:26.409756 4870 scope.go:117] "RemoveContainer" containerID="59dd3f065b079b1913a4f498086fdcee013f59a43677085570cb3e003e1cb9ad" Jan 30 08:25:26 crc kubenswrapper[4870]: I0130 08:25:26.414114 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t4nh7"] Jan 30 08:25:26 crc kubenswrapper[4870]: I0130 08:25:26.434429 4870 scope.go:117] "RemoveContainer" containerID="ced22f80742d52f01c7a2a77934c1cc85ecaeb0560f0d90e44530ff08ba4d9a1" Jan 30 08:25:28 crc kubenswrapper[4870]: I0130 08:25:28.092557 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7766425-f469-4513-b62b-e44e3d3f81bc" path="/var/lib/kubelet/pods/d7766425-f469-4513-b62b-e44e3d3f81bc/volumes" Jan 30 08:25:29 crc kubenswrapper[4870]: I0130 08:25:29.667356 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w8txq"] Jan 30 08:25:29 crc kubenswrapper[4870]: E0130 08:25:29.668025 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7766425-f469-4513-b62b-e44e3d3f81bc" containerName="registry-server" Jan 30 08:25:29 crc kubenswrapper[4870]: I0130 08:25:29.668038 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7766425-f469-4513-b62b-e44e3d3f81bc" containerName="registry-server" Jan 30 08:25:29 crc kubenswrapper[4870]: E0130 08:25:29.668050 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2ebdb93-c8ce-45c1-b10f-037853cc99d9" containerName="extract-utilities" Jan 30 08:25:29 crc kubenswrapper[4870]: I0130 08:25:29.668056 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2ebdb93-c8ce-45c1-b10f-037853cc99d9" containerName="extract-utilities" Jan 30 08:25:29 crc kubenswrapper[4870]: E0130 08:25:29.668069 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7766425-f469-4513-b62b-e44e3d3f81bc" containerName="extract-utilities" Jan 30 08:25:29 crc kubenswrapper[4870]: I0130 08:25:29.668077 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7766425-f469-4513-b62b-e44e3d3f81bc" containerName="extract-utilities" Jan 30 08:25:29 crc kubenswrapper[4870]: E0130 08:25:29.668088 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2ebdb93-c8ce-45c1-b10f-037853cc99d9" containerName="registry-server" Jan 30 08:25:29 crc kubenswrapper[4870]: I0130 08:25:29.668094 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2ebdb93-c8ce-45c1-b10f-037853cc99d9" containerName="registry-server" Jan 30 08:25:29 crc kubenswrapper[4870]: E0130 08:25:29.668110 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2ebdb93-c8ce-45c1-b10f-037853cc99d9" containerName="extract-content" Jan 30 08:25:29 crc kubenswrapper[4870]: I0130 08:25:29.668116 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2ebdb93-c8ce-45c1-b10f-037853cc99d9" containerName="extract-content" Jan 30 08:25:29 crc kubenswrapper[4870]: E0130 08:25:29.668126 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7766425-f469-4513-b62b-e44e3d3f81bc" containerName="extract-content" Jan 30 08:25:29 crc kubenswrapper[4870]: I0130 08:25:29.668132 4870 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d7766425-f469-4513-b62b-e44e3d3f81bc" containerName="extract-content" Jan 30 08:25:29 crc kubenswrapper[4870]: I0130 08:25:29.668259 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2ebdb93-c8ce-45c1-b10f-037853cc99d9" containerName="registry-server" Jan 30 08:25:29 crc kubenswrapper[4870]: I0130 08:25:29.668268 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7766425-f469-4513-b62b-e44e3d3f81bc" containerName="registry-server" Jan 30 08:25:29 crc kubenswrapper[4870]: I0130 08:25:29.669314 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w8txq" Jan 30 08:25:29 crc kubenswrapper[4870]: I0130 08:25:29.687290 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w8txq"] Jan 30 08:25:29 crc kubenswrapper[4870]: I0130 08:25:29.793274 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815f19a2-b916-4782-b47c-84c3e9f7256b-catalog-content\") pod \"community-operators-w8txq\" (UID: \"815f19a2-b916-4782-b47c-84c3e9f7256b\") " pod="openshift-marketplace/community-operators-w8txq" Jan 30 08:25:29 crc kubenswrapper[4870]: I0130 08:25:29.793328 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8fv4\" (UniqueName: \"kubernetes.io/projected/815f19a2-b916-4782-b47c-84c3e9f7256b-kube-api-access-q8fv4\") pod \"community-operators-w8txq\" (UID: \"815f19a2-b916-4782-b47c-84c3e9f7256b\") " pod="openshift-marketplace/community-operators-w8txq" Jan 30 08:25:29 crc kubenswrapper[4870]: I0130 08:25:29.793375 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815f19a2-b916-4782-b47c-84c3e9f7256b-utilities\") pod \"community-operators-w8txq\" (UID: \"815f19a2-b916-4782-b47c-84c3e9f7256b\") " pod="openshift-marketplace/community-operators-w8txq" Jan 30 08:25:29 crc kubenswrapper[4870]: I0130 08:25:29.895078 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815f19a2-b916-4782-b47c-84c3e9f7256b-utilities\") pod \"community-operators-w8txq\" (UID: \"815f19a2-b916-4782-b47c-84c3e9f7256b\") " pod="openshift-marketplace/community-operators-w8txq" Jan 30 08:25:29 crc kubenswrapper[4870]: I0130 08:25:29.895235 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815f19a2-b916-4782-b47c-84c3e9f7256b-catalog-content\") pod \"community-operators-w8txq\" (UID: \"815f19a2-b916-4782-b47c-84c3e9f7256b\") " pod="openshift-marketplace/community-operators-w8txq" Jan 30 08:25:29 crc kubenswrapper[4870]: I0130 08:25:29.895265 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8fv4\" (UniqueName: \"kubernetes.io/projected/815f19a2-b916-4782-b47c-84c3e9f7256b-kube-api-access-q8fv4\") pod \"community-operators-w8txq\" (UID: \"815f19a2-b916-4782-b47c-84c3e9f7256b\") " pod="openshift-marketplace/community-operators-w8txq" Jan 30 08:25:29 crc kubenswrapper[4870]: I0130 08:25:29.895611 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815f19a2-b916-4782-b47c-84c3e9f7256b-utilities\") pod 
\"community-operators-w8txq\" (UID: \"815f19a2-b916-4782-b47c-84c3e9f7256b\") " pod="openshift-marketplace/community-operators-w8txq" Jan 30 08:25:29 crc kubenswrapper[4870]: I0130 08:25:29.895971 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815f19a2-b916-4782-b47c-84c3e9f7256b-catalog-content\") pod \"community-operators-w8txq\" (UID: \"815f19a2-b916-4782-b47c-84c3e9f7256b\") " pod="openshift-marketplace/community-operators-w8txq" Jan 30 08:25:29 crc kubenswrapper[4870]: I0130 08:25:29.919395 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8fv4\" (UniqueName: \"kubernetes.io/projected/815f19a2-b916-4782-b47c-84c3e9f7256b-kube-api-access-q8fv4\") pod \"community-operators-w8txq\" (UID: \"815f19a2-b916-4782-b47c-84c3e9f7256b\") " pod="openshift-marketplace/community-operators-w8txq" Jan 30 08:25:30 crc kubenswrapper[4870]: I0130 08:25:30.001397 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w8txq" Jan 30 08:25:30 crc kubenswrapper[4870]: I0130 08:25:30.515447 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w8txq"] Jan 30 08:25:30 crc kubenswrapper[4870]: W0130 08:25:30.524700 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod815f19a2_b916_4782_b47c_84c3e9f7256b.slice/crio-dffa78b9cc8ecf235e2db29e9a60513b5a7e5fad878eed0fcc2cadbd97531a8a WatchSource:0}: Error finding container dffa78b9cc8ecf235e2db29e9a60513b5a7e5fad878eed0fcc2cadbd97531a8a: Status 404 returned error can't find the container with id dffa78b9cc8ecf235e2db29e9a60513b5a7e5fad878eed0fcc2cadbd97531a8a Jan 30 08:25:31 crc kubenswrapper[4870]: I0130 08:25:31.390962 4870 generic.go:334] "Generic (PLEG): container finished" podID="815f19a2-b916-4782-b47c-84c3e9f7256b" containerID="f0ec955f8bd758261b3b0837ffe61d337bbaee4c2b09b72c8be041200568c84d" exitCode=0 Jan 30 08:25:31 crc kubenswrapper[4870]: I0130 08:25:31.391042 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8txq" event={"ID":"815f19a2-b916-4782-b47c-84c3e9f7256b","Type":"ContainerDied","Data":"f0ec955f8bd758261b3b0837ffe61d337bbaee4c2b09b72c8be041200568c84d"} Jan 30 08:25:31 crc kubenswrapper[4870]: I0130 08:25:31.392209 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8txq" event={"ID":"815f19a2-b916-4782-b47c-84c3e9f7256b","Type":"ContainerStarted","Data":"dffa78b9cc8ecf235e2db29e9a60513b5a7e5fad878eed0fcc2cadbd97531a8a"} Jan 30 08:25:32 crc kubenswrapper[4870]: I0130 08:25:32.401961 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sds6v" event={"ID":"b706cc39-6af6-4a91-b2a2-6160148dadae","Type":"ContainerStarted","Data":"41ae688cfff4407f497ec5966bb3d9b09c4f68a691b626233e760ff995c8fab9"} Jan 30 08:25:32 crc kubenswrapper[4870]: I0130 08:25:32.404588 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7b7dd57594-2p68v" event={"ID":"d6956410-92c0-40bf-b1c1-a3353ccf1bbc","Type":"ContainerStarted","Data":"60cadaf796b491d2cdd2165fdf687979e8eeb93a44e6764c727b1055c0524514"} Jan 30 08:25:32 crc kubenswrapper[4870]: I0130 08:25:32.404926 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/watcher-operator-controller-manager-7b7dd57594-2p68v" Jan 30 08:25:32 crc kubenswrapper[4870]: I0130 08:25:32.409732 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8txq" event={"ID":"815f19a2-b916-4782-b47c-84c3e9f7256b","Type":"ContainerStarted","Data":"f34b58d15d78e4ed6c3d87597e6195d81d3ae51c781f14d73eeda3ad6761a6b7"} Jan 30 08:25:32 crc kubenswrapper[4870]: I0130 08:25:32.427756 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sds6v" podStartSLOduration=3.284596121 podStartE2EDuration="1m5.427728577s" podCreationTimestamp="2026-01-30 08:24:27 +0000 UTC" firstStartedPulling="2026-01-30 08:24:29.42288918 +0000 UTC m=+908.118436289" lastFinishedPulling="2026-01-30 08:25:31.566021626 +0000 UTC m=+970.261568745" observedRunningTime="2026-01-30 08:25:32.424940439 +0000 UTC m=+971.120487568" watchObservedRunningTime="2026-01-30 08:25:32.427728577 +0000 UTC m=+971.123275686" Jan 30 08:25:32 crc kubenswrapper[4870]: I0130 08:25:32.484080 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7b7dd57594-2p68v" podStartSLOduration=2.616284786 podStartE2EDuration="1m5.484053337s" podCreationTimestamp="2026-01-30 08:24:27 +0000 UTC" firstStartedPulling="2026-01-30 08:24:29.300832996 +0000 UTC m=+907.996380105" lastFinishedPulling="2026-01-30 08:25:32.168601537 +0000 UTC m=+970.864148656" observedRunningTime="2026-01-30 08:25:32.474340601 +0000 UTC m=+971.169887740" watchObservedRunningTime="2026-01-30 08:25:32.484053337 +0000 UTC m=+971.179600486" Jan 30 08:25:33 crc kubenswrapper[4870]: I0130 08:25:33.423195 4870 generic.go:334] "Generic (PLEG): container finished" podID="815f19a2-b916-4782-b47c-84c3e9f7256b" containerID="f34b58d15d78e4ed6c3d87597e6195d81d3ae51c781f14d73eeda3ad6761a6b7" exitCode=0 Jan 30 08:25:33 crc kubenswrapper[4870]: I0130 08:25:33.423285 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8txq" event={"ID":"815f19a2-b916-4782-b47c-84c3e9f7256b","Type":"ContainerDied","Data":"f34b58d15d78e4ed6c3d87597e6195d81d3ae51c781f14d73eeda3ad6761a6b7"} Jan 30 08:25:34 crc kubenswrapper[4870]: I0130 08:25:34.434107 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8txq" event={"ID":"815f19a2-b916-4782-b47c-84c3e9f7256b","Type":"ContainerStarted","Data":"fb6633fc11d8cb9a25a5e95a83f794bb99d01e445ef450402b6e145aadfc35b8"} Jan 30 08:25:34 crc kubenswrapper[4870]: I0130 08:25:34.465516 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w8txq" podStartSLOduration=3.044836489 podStartE2EDuration="5.465491656s" podCreationTimestamp="2026-01-30 08:25:29 +0000 UTC" firstStartedPulling="2026-01-30 08:25:31.393159636 +0000 UTC m=+970.088706775" lastFinishedPulling="2026-01-30 08:25:33.813814803 +0000 UTC m=+972.509361942" observedRunningTime="2026-01-30 08:25:34.456014439 +0000 UTC m=+973.151561568" watchObservedRunningTime="2026-01-30 08:25:34.465491656 +0000 UTC m=+973.161038785" Jan 30 08:25:36 crc kubenswrapper[4870]: E0130 08:25:36.387101 4870 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2ebdb93_c8ce_45c1_b10f_037853cc99d9.slice/crio-424c2889668056eeecef60e4a12c02f8812311e06dd82e3b2b01529aeee939aa\": RecentStats: unable to find data in memory cache]" Jan 30 08:25:38 crc kubenswrapper[4870]: I0130 08:25:38.417187 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7b7dd57594-2p68v" Jan 30 08:25:40 crc kubenswrapper[4870]: I0130 08:25:40.002444 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w8txq" Jan 30 08:25:40 crc kubenswrapper[4870]: I0130 08:25:40.002505 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w8txq" Jan 30 08:25:40 crc kubenswrapper[4870]: I0130 08:25:40.059229 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w8txq" Jan 30 08:25:40 crc kubenswrapper[4870]: I0130 08:25:40.652848 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w8txq" Jan 30 08:25:41 crc kubenswrapper[4870]: I0130 08:25:41.028393 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w8txq"] Jan 30 08:25:42 crc kubenswrapper[4870]: I0130 08:25:42.598577 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w8txq" podUID="815f19a2-b916-4782-b47c-84c3e9f7256b" containerName="registry-server" containerID="cri-o://fb6633fc11d8cb9a25a5e95a83f794bb99d01e445ef450402b6e145aadfc35b8" gracePeriod=2 Jan 30 08:25:43 crc kubenswrapper[4870]: I0130 08:25:43.623436 4870 generic.go:334] "Generic (PLEG): container finished" podID="815f19a2-b916-4782-b47c-84c3e9f7256b" containerID="fb6633fc11d8cb9a25a5e95a83f794bb99d01e445ef450402b6e145aadfc35b8" exitCode=0 Jan 30 08:25:43 crc kubenswrapper[4870]: I0130 08:25:43.623774 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8txq" event={"ID":"815f19a2-b916-4782-b47c-84c3e9f7256b","Type":"ContainerDied","Data":"fb6633fc11d8cb9a25a5e95a83f794bb99d01e445ef450402b6e145aadfc35b8"} Jan 30 08:25:43 crc kubenswrapper[4870]: I0130 08:25:43.907915 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w8txq" Jan 30 08:25:43 crc kubenswrapper[4870]: I0130 08:25:43.981181 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815f19a2-b916-4782-b47c-84c3e9f7256b-utilities\") pod \"815f19a2-b916-4782-b47c-84c3e9f7256b\" (UID: \"815f19a2-b916-4782-b47c-84c3e9f7256b\") " Jan 30 08:25:43 crc kubenswrapper[4870]: I0130 08:25:43.981257 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8fv4\" (UniqueName: \"kubernetes.io/projected/815f19a2-b916-4782-b47c-84c3e9f7256b-kube-api-access-q8fv4\") pod \"815f19a2-b916-4782-b47c-84c3e9f7256b\" (UID: \"815f19a2-b916-4782-b47c-84c3e9f7256b\") " Jan 30 08:25:43 crc kubenswrapper[4870]: I0130 08:25:43.981282 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815f19a2-b916-4782-b47c-84c3e9f7256b-catalog-content\") pod \"815f19a2-b916-4782-b47c-84c3e9f7256b\" (UID: \"815f19a2-b916-4782-b47c-84c3e9f7256b\") " Jan 30 08:25:43 crc kubenswrapper[4870]: I0130 08:25:43.982178 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/815f19a2-b916-4782-b47c-84c3e9f7256b-utilities" (OuterVolumeSpecName: "utilities") pod "815f19a2-b916-4782-b47c-84c3e9f7256b" (UID: "815f19a2-b916-4782-b47c-84c3e9f7256b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:25:44 crc kubenswrapper[4870]: I0130 08:25:44.011285 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815f19a2-b916-4782-b47c-84c3e9f7256b-kube-api-access-q8fv4" (OuterVolumeSpecName: "kube-api-access-q8fv4") pod "815f19a2-b916-4782-b47c-84c3e9f7256b" (UID: "815f19a2-b916-4782-b47c-84c3e9f7256b"). InnerVolumeSpecName "kube-api-access-q8fv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:25:44 crc kubenswrapper[4870]: I0130 08:25:44.082382 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815f19a2-b916-4782-b47c-84c3e9f7256b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:25:44 crc kubenswrapper[4870]: I0130 08:25:44.082409 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8fv4\" (UniqueName: \"kubernetes.io/projected/815f19a2-b916-4782-b47c-84c3e9f7256b-kube-api-access-q8fv4\") on node \"crc\" DevicePath \"\"" Jan 30 08:25:44 crc kubenswrapper[4870]: I0130 08:25:44.365682 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/815f19a2-b916-4782-b47c-84c3e9f7256b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "815f19a2-b916-4782-b47c-84c3e9f7256b" (UID: "815f19a2-b916-4782-b47c-84c3e9f7256b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:25:44 crc kubenswrapper[4870]: I0130 08:25:44.388325 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815f19a2-b916-4782-b47c-84c3e9f7256b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:25:44 crc kubenswrapper[4870]: I0130 08:25:44.640091 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8txq" event={"ID":"815f19a2-b916-4782-b47c-84c3e9f7256b","Type":"ContainerDied","Data":"dffa78b9cc8ecf235e2db29e9a60513b5a7e5fad878eed0fcc2cadbd97531a8a"} Jan 30 08:25:44 crc kubenswrapper[4870]: I0130 08:25:44.640155 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w8txq" Jan 30 08:25:44 crc kubenswrapper[4870]: I0130 08:25:44.640206 4870 scope.go:117] "RemoveContainer" containerID="fb6633fc11d8cb9a25a5e95a83f794bb99d01e445ef450402b6e145aadfc35b8" Jan 30 08:25:44 crc kubenswrapper[4870]: I0130 08:25:44.673573 4870 scope.go:117] "RemoveContainer" containerID="f34b58d15d78e4ed6c3d87597e6195d81d3ae51c781f14d73eeda3ad6761a6b7" Jan 30 08:25:44 crc kubenswrapper[4870]: I0130 08:25:44.705751 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w8txq"] Jan 30 08:25:44 crc kubenswrapper[4870]: I0130 08:25:44.714072 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w8txq"] Jan 30 08:25:44 crc kubenswrapper[4870]: I0130 08:25:44.729071 4870 scope.go:117] "RemoveContainer" containerID="f0ec955f8bd758261b3b0837ffe61d337bbaee4c2b09b72c8be041200568c84d" Jan 30 08:25:46 crc kubenswrapper[4870]: I0130 08:25:46.088612 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="815f19a2-b916-4782-b47c-84c3e9f7256b" path="/var/lib/kubelet/pods/815f19a2-b916-4782-b47c-84c3e9f7256b/volumes" Jan 30 08:25:46 crc kubenswrapper[4870]: E0130 08:25:46.649534 4870 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2ebdb93_c8ce_45c1_b10f_037853cc99d9.slice/crio-424c2889668056eeecef60e4a12c02f8812311e06dd82e3b2b01529aeee939aa\": RecentStats: unable to find data in memory cache]" Jan 30 08:25:55 crc kubenswrapper[4870]: I0130 08:25:55.250249 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:25:55 crc kubenswrapper[4870]: I0130 08:25:55.250901 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:25:55 crc kubenswrapper[4870]: I0130 08:25:55.250972 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:25:55 crc kubenswrapper[4870]: I0130 08:25:55.251968 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"902c9bb8b96922377d8d6da6fb79b392e9bc4a710daf7c3c1a77d5b9c2b536ac"} pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:25:55 crc kubenswrapper[4870]: I0130 08:25:55.252067 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" containerID="cri-o://902c9bb8b96922377d8d6da6fb79b392e9bc4a710daf7c3c1a77d5b9c2b536ac" gracePeriod=600 Jan 30 08:25:55 crc kubenswrapper[4870]: I0130 08:25:55.729655 4870 generic.go:334] "Generic (PLEG): container finished" podID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerID="902c9bb8b96922377d8d6da6fb79b392e9bc4a710daf7c3c1a77d5b9c2b536ac" exitCode=0 Jan 30 08:25:55 crc kubenswrapper[4870]: I0130 08:25:55.729729 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerDied","Data":"902c9bb8b96922377d8d6da6fb79b392e9bc4a710daf7c3c1a77d5b9c2b536ac"} Jan 30 08:25:55 crc kubenswrapper[4870]: I0130 08:25:55.730005 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerStarted","Data":"736e1ea4b0b2b4fa81dc9ec4fa9950e05f221b62734f7e8de7d7969e9158f7ae"} Jan 30 08:25:55 crc kubenswrapper[4870]: I0130 08:25:55.730027 4870 scope.go:117] "RemoveContainer" containerID="8f05305445b605660ea999aab22b621a1da0c30929b1af3251f46624decd30be" Jan 30 08:25:56 crc kubenswrapper[4870]: E0130 08:25:56.841411 4870 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2ebdb93_c8ce_45c1_b10f_037853cc99d9.slice/crio-424c2889668056eeecef60e4a12c02f8812311e06dd82e3b2b01529aeee939aa\": RecentStats: unable to find data in memory cache]" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.192477 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9cd4f5bf5-n8lzz"] Jan 30 08:26:00 crc kubenswrapper[4870]: E0130 08:26:00.193453 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815f19a2-b916-4782-b47c-84c3e9f7256b" containerName="extract-content" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.193470 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="815f19a2-b916-4782-b47c-84c3e9f7256b" containerName="extract-content" Jan 30 08:26:00 crc kubenswrapper[4870]: E0130 08:26:00.193484 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815f19a2-b916-4782-b47c-84c3e9f7256b" containerName="registry-server" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.193492 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="815f19a2-b916-4782-b47c-84c3e9f7256b" containerName="registry-server" Jan 30 08:26:00 crc kubenswrapper[4870]: E0130 08:26:00.193507 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815f19a2-b916-4782-b47c-84c3e9f7256b" containerName="extract-utilities" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.193516 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="815f19a2-b916-4782-b47c-84c3e9f7256b" containerName="extract-utilities" Jan 30 08:26:00 crc 
kubenswrapper[4870]: I0130 08:26:00.193699 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="815f19a2-b916-4782-b47c-84c3e9f7256b" containerName="registry-server" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.194699 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9cd4f5bf5-n8lzz" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.197769 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.197857 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.197940 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.198555 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-52wst" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.200451 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9cd4f5bf5-n8lzz"] Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.241487 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2-config\") pod \"dnsmasq-dns-9cd4f5bf5-n8lzz\" (UID: \"7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2\") " pod="openstack/dnsmasq-dns-9cd4f5bf5-n8lzz" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.241542 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwpxs\" (UniqueName: \"kubernetes.io/projected/7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2-kube-api-access-bwpxs\") pod \"dnsmasq-dns-9cd4f5bf5-n8lzz\" (UID: \"7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2\") " pod="openstack/dnsmasq-dns-9cd4f5bf5-n8lzz" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.252176 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6dd95798b9-btgvj"] Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.254131 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dd95798b9-btgvj" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.256245 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.265245 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dd95798b9-btgvj"] Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.343056 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjl4f\" (UniqueName: \"kubernetes.io/projected/c135f9a2-386b-4108-a40d-a703e4d72b13-kube-api-access-hjl4f\") pod \"dnsmasq-dns-6dd95798b9-btgvj\" (UID: \"c135f9a2-386b-4108-a40d-a703e4d72b13\") " pod="openstack/dnsmasq-dns-6dd95798b9-btgvj" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.343149 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2-config\") pod \"dnsmasq-dns-9cd4f5bf5-n8lzz\" (UID: \"7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2\") " pod="openstack/dnsmasq-dns-9cd4f5bf5-n8lzz" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.343185 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwpxs\" (UniqueName: \"kubernetes.io/projected/7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2-kube-api-access-bwpxs\") pod \"dnsmasq-dns-9cd4f5bf5-n8lzz\" (UID: \"7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2\") " pod="openstack/dnsmasq-dns-9cd4f5bf5-n8lzz" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.343209 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c135f9a2-386b-4108-a40d-a703e4d72b13-config\") pod \"dnsmasq-dns-6dd95798b9-btgvj\" (UID: \"c135f9a2-386b-4108-a40d-a703e4d72b13\") " pod="openstack/dnsmasq-dns-6dd95798b9-btgvj" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.343230 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c135f9a2-386b-4108-a40d-a703e4d72b13-dns-svc\") pod \"dnsmasq-dns-6dd95798b9-btgvj\" (UID: \"c135f9a2-386b-4108-a40d-a703e4d72b13\") " pod="openstack/dnsmasq-dns-6dd95798b9-btgvj" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.344048 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2-config\") pod \"dnsmasq-dns-9cd4f5bf5-n8lzz\" (UID: \"7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2\") " pod="openstack/dnsmasq-dns-9cd4f5bf5-n8lzz" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.360568 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwpxs\" (UniqueName: \"kubernetes.io/projected/7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2-kube-api-access-bwpxs\") pod \"dnsmasq-dns-9cd4f5bf5-n8lzz\" (UID: \"7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2\") " pod="openstack/dnsmasq-dns-9cd4f5bf5-n8lzz" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.444466 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c135f9a2-386b-4108-a40d-a703e4d72b13-config\") pod \"dnsmasq-dns-6dd95798b9-btgvj\" (UID: \"c135f9a2-386b-4108-a40d-a703e4d72b13\") " pod="openstack/dnsmasq-dns-6dd95798b9-btgvj" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 
08:26:00.444688 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c135f9a2-386b-4108-a40d-a703e4d72b13-dns-svc\") pod \"dnsmasq-dns-6dd95798b9-btgvj\" (UID: \"c135f9a2-386b-4108-a40d-a703e4d72b13\") " pod="openstack/dnsmasq-dns-6dd95798b9-btgvj" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.444867 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjl4f\" (UniqueName: \"kubernetes.io/projected/c135f9a2-386b-4108-a40d-a703e4d72b13-kube-api-access-hjl4f\") pod \"dnsmasq-dns-6dd95798b9-btgvj\" (UID: \"c135f9a2-386b-4108-a40d-a703e4d72b13\") " pod="openstack/dnsmasq-dns-6dd95798b9-btgvj" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.445411 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c135f9a2-386b-4108-a40d-a703e4d72b13-dns-svc\") pod \"dnsmasq-dns-6dd95798b9-btgvj\" (UID: \"c135f9a2-386b-4108-a40d-a703e4d72b13\") " pod="openstack/dnsmasq-dns-6dd95798b9-btgvj" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.445415 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c135f9a2-386b-4108-a40d-a703e4d72b13-config\") pod \"dnsmasq-dns-6dd95798b9-btgvj\" (UID: \"c135f9a2-386b-4108-a40d-a703e4d72b13\") " pod="openstack/dnsmasq-dns-6dd95798b9-btgvj" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.466592 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjl4f\" (UniqueName: \"kubernetes.io/projected/c135f9a2-386b-4108-a40d-a703e4d72b13-kube-api-access-hjl4f\") pod \"dnsmasq-dns-6dd95798b9-btgvj\" (UID: \"c135f9a2-386b-4108-a40d-a703e4d72b13\") " pod="openstack/dnsmasq-dns-6dd95798b9-btgvj" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.516596 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9cd4f5bf5-n8lzz" Jan 30 08:26:00 crc kubenswrapper[4870]: I0130 08:26:00.572459 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dd95798b9-btgvj" Jan 30 08:26:01 crc kubenswrapper[4870]: I0130 08:26:01.044033 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dd95798b9-btgvj"] Jan 30 08:26:01 crc kubenswrapper[4870]: W0130 08:26:01.130730 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c2e9ac8_fed6_4f2e_9a1b_26e0b253a3d2.slice/crio-954aad0a8aef2531b8fe421c82e8e2a545985a769437ed6265789109329f05be WatchSource:0}: Error finding container 954aad0a8aef2531b8fe421c82e8e2a545985a769437ed6265789109329f05be: Status 404 returned error can't find the container with id 954aad0a8aef2531b8fe421c82e8e2a545985a769437ed6265789109329f05be Jan 30 08:26:01 crc kubenswrapper[4870]: I0130 08:26:01.132181 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9cd4f5bf5-n8lzz"] Jan 30 08:26:01 crc kubenswrapper[4870]: I0130 08:26:01.782339 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9cd4f5bf5-n8lzz" event={"ID":"7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2","Type":"ContainerStarted","Data":"954aad0a8aef2531b8fe421c82e8e2a545985a769437ed6265789109329f05be"} Jan 30 08:26:01 crc kubenswrapper[4870]: I0130 08:26:01.783909 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dd95798b9-btgvj" event={"ID":"c135f9a2-386b-4108-a40d-a703e4d72b13","Type":"ContainerStarted","Data":"85ed67ebbca181d72c31532b0492443d898520cd3d920466a779ed625e90274a"} Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.010779 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dd95798b9-btgvj"] Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.030699 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64594fd94f-bp9gj"] Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.032079 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64594fd94f-bp9gj" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.048535 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64594fd94f-bp9gj"] Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.224739 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26wbn\" (UniqueName: \"kubernetes.io/projected/f491adde-145d-44fc-9414-0fd92c41a114-kube-api-access-26wbn\") pod \"dnsmasq-dns-64594fd94f-bp9gj\" (UID: \"f491adde-145d-44fc-9414-0fd92c41a114\") " pod="openstack/dnsmasq-dns-64594fd94f-bp9gj" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.224954 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f491adde-145d-44fc-9414-0fd92c41a114-dns-svc\") pod \"dnsmasq-dns-64594fd94f-bp9gj\" (UID: \"f491adde-145d-44fc-9414-0fd92c41a114\") " pod="openstack/dnsmasq-dns-64594fd94f-bp9gj" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.225031 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f491adde-145d-44fc-9414-0fd92c41a114-config\") pod \"dnsmasq-dns-64594fd94f-bp9gj\" (UID: \"f491adde-145d-44fc-9414-0fd92c41a114\") " pod="openstack/dnsmasq-dns-64594fd94f-bp9gj" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.328226 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f491adde-145d-44fc-9414-0fd92c41a114-dns-svc\") pod \"dnsmasq-dns-64594fd94f-bp9gj\" (UID: \"f491adde-145d-44fc-9414-0fd92c41a114\") " pod="openstack/dnsmasq-dns-64594fd94f-bp9gj" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.328477 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f491adde-145d-44fc-9414-0fd92c41a114-config\") pod \"dnsmasq-dns-64594fd94f-bp9gj\" (UID: \"f491adde-145d-44fc-9414-0fd92c41a114\") " pod="openstack/dnsmasq-dns-64594fd94f-bp9gj" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.328507 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26wbn\" (UniqueName: \"kubernetes.io/projected/f491adde-145d-44fc-9414-0fd92c41a114-kube-api-access-26wbn\") pod \"dnsmasq-dns-64594fd94f-bp9gj\" (UID: \"f491adde-145d-44fc-9414-0fd92c41a114\") " pod="openstack/dnsmasq-dns-64594fd94f-bp9gj" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.329613 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f491adde-145d-44fc-9414-0fd92c41a114-dns-svc\") pod \"dnsmasq-dns-64594fd94f-bp9gj\" (UID: \"f491adde-145d-44fc-9414-0fd92c41a114\") " pod="openstack/dnsmasq-dns-64594fd94f-bp9gj" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.331161 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f491adde-145d-44fc-9414-0fd92c41a114-config\") pod \"dnsmasq-dns-64594fd94f-bp9gj\" (UID: \"f491adde-145d-44fc-9414-0fd92c41a114\") " pod="openstack/dnsmasq-dns-64594fd94f-bp9gj" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.346828 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9cd4f5bf5-n8lzz"] Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.360106 
4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26wbn\" (UniqueName: \"kubernetes.io/projected/f491adde-145d-44fc-9414-0fd92c41a114-kube-api-access-26wbn\") pod \"dnsmasq-dns-64594fd94f-bp9gj\" (UID: \"f491adde-145d-44fc-9414-0fd92c41a114\") " pod="openstack/dnsmasq-dns-64594fd94f-bp9gj" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.406669 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d56d856cf-n69v7"] Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.417513 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.429725 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/033dbc66-0baa-46b3-8fda-3881303e4e40-config\") pod \"dnsmasq-dns-7d56d856cf-n69v7\" (UID: \"033dbc66-0baa-46b3-8fda-3881303e4e40\") " pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.429825 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd26p\" (UniqueName: \"kubernetes.io/projected/033dbc66-0baa-46b3-8fda-3881303e4e40-kube-api-access-zd26p\") pod \"dnsmasq-dns-7d56d856cf-n69v7\" (UID: \"033dbc66-0baa-46b3-8fda-3881303e4e40\") " pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.429857 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/033dbc66-0baa-46b3-8fda-3881303e4e40-dns-svc\") pod \"dnsmasq-dns-7d56d856cf-n69v7\" (UID: \"033dbc66-0baa-46b3-8fda-3881303e4e40\") " pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.433248 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d56d856cf-n69v7"] Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.530621 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/033dbc66-0baa-46b3-8fda-3881303e4e40-config\") pod \"dnsmasq-dns-7d56d856cf-n69v7\" (UID: \"033dbc66-0baa-46b3-8fda-3881303e4e40\") " pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.530703 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd26p\" (UniqueName: \"kubernetes.io/projected/033dbc66-0baa-46b3-8fda-3881303e4e40-kube-api-access-zd26p\") pod \"dnsmasq-dns-7d56d856cf-n69v7\" (UID: \"033dbc66-0baa-46b3-8fda-3881303e4e40\") " pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.530730 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/033dbc66-0baa-46b3-8fda-3881303e4e40-dns-svc\") pod \"dnsmasq-dns-7d56d856cf-n69v7\" (UID: \"033dbc66-0baa-46b3-8fda-3881303e4e40\") " pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.531473 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/033dbc66-0baa-46b3-8fda-3881303e4e40-dns-svc\") pod \"dnsmasq-dns-7d56d856cf-n69v7\" (UID: \"033dbc66-0baa-46b3-8fda-3881303e4e40\") " 
pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.532003 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/033dbc66-0baa-46b3-8fda-3881303e4e40-config\") pod \"dnsmasq-dns-7d56d856cf-n69v7\" (UID: \"033dbc66-0baa-46b3-8fda-3881303e4e40\") " pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.551608 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd26p\" (UniqueName: \"kubernetes.io/projected/033dbc66-0baa-46b3-8fda-3881303e4e40-kube-api-access-zd26p\") pod \"dnsmasq-dns-7d56d856cf-n69v7\" (UID: \"033dbc66-0baa-46b3-8fda-3881303e4e40\") " pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.648662 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64594fd94f-bp9gj" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.748464 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64594fd94f-bp9gj"] Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.766847 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.767111 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57467f675c-j7lcp"] Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.768175 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57467f675c-j7lcp" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.785822 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57467f675c-j7lcp"] Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.835141 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pkdd\" (UniqueName: \"kubernetes.io/projected/a7cb8c3d-2157-4c52-a196-24d514b098ee-kube-api-access-6pkdd\") pod \"dnsmasq-dns-57467f675c-j7lcp\" (UID: \"a7cb8c3d-2157-4c52-a196-24d514b098ee\") " pod="openstack/dnsmasq-dns-57467f675c-j7lcp" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.835215 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7cb8c3d-2157-4c52-a196-24d514b098ee-dns-svc\") pod \"dnsmasq-dns-57467f675c-j7lcp\" (UID: \"a7cb8c3d-2157-4c52-a196-24d514b098ee\") " pod="openstack/dnsmasq-dns-57467f675c-j7lcp" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.835248 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7cb8c3d-2157-4c52-a196-24d514b098ee-config\") pod \"dnsmasq-dns-57467f675c-j7lcp\" (UID: \"a7cb8c3d-2157-4c52-a196-24d514b098ee\") " pod="openstack/dnsmasq-dns-57467f675c-j7lcp" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.936498 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pkdd\" (UniqueName: \"kubernetes.io/projected/a7cb8c3d-2157-4c52-a196-24d514b098ee-kube-api-access-6pkdd\") pod \"dnsmasq-dns-57467f675c-j7lcp\" (UID: \"a7cb8c3d-2157-4c52-a196-24d514b098ee\") " pod="openstack/dnsmasq-dns-57467f675c-j7lcp" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.937476 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7cb8c3d-2157-4c52-a196-24d514b098ee-dns-svc\") pod \"dnsmasq-dns-57467f675c-j7lcp\" (UID: \"a7cb8c3d-2157-4c52-a196-24d514b098ee\") " pod="openstack/dnsmasq-dns-57467f675c-j7lcp" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.937508 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7cb8c3d-2157-4c52-a196-24d514b098ee-config\") pod \"dnsmasq-dns-57467f675c-j7lcp\" (UID: \"a7cb8c3d-2157-4c52-a196-24d514b098ee\") " pod="openstack/dnsmasq-dns-57467f675c-j7lcp" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.939204 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7cb8c3d-2157-4c52-a196-24d514b098ee-config\") pod \"dnsmasq-dns-57467f675c-j7lcp\" (UID: \"a7cb8c3d-2157-4c52-a196-24d514b098ee\") " pod="openstack/dnsmasq-dns-57467f675c-j7lcp" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.939371 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7cb8c3d-2157-4c52-a196-24d514b098ee-dns-svc\") pod \"dnsmasq-dns-57467f675c-j7lcp\" (UID: \"a7cb8c3d-2157-4c52-a196-24d514b098ee\") " pod="openstack/dnsmasq-dns-57467f675c-j7lcp" Jan 30 08:26:04 crc kubenswrapper[4870]: I0130 08:26:04.971813 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pkdd\" (UniqueName: \"kubernetes.io/projected/a7cb8c3d-2157-4c52-a196-24d514b098ee-kube-api-access-6pkdd\") pod \"dnsmasq-dns-57467f675c-j7lcp\" (UID: \"a7cb8c3d-2157-4c52-a196-24d514b098ee\") " pod="openstack/dnsmasq-dns-57467f675c-j7lcp" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.096030 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57467f675c-j7lcp" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.178551 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.179949 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.182579 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.183204 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.183438 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.185734 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-lwd7k" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.185744 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.185909 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.186222 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.192258 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.343600 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.343641 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/97f21b9d-25bf-4a64-94ef-51d83b662ab3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.343701 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/97f21b9d-25bf-4a64-94ef-51d83b662ab3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.343736 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn9pp\" (UniqueName: \"kubernetes.io/projected/97f21b9d-25bf-4a64-94ef-51d83b662ab3-kube-api-access-mn9pp\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.343765 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.343786 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/97f21b9d-25bf-4a64-94ef-51d83b662ab3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.343799 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97f21b9d-25bf-4a64-94ef-51d83b662ab3-config-data\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.343822 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/97f21b9d-25bf-4a64-94ef-51d83b662ab3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.343851 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.343867 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.343905 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.444798 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.444843 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.444865 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/97f21b9d-25bf-4a64-94ef-51d83b662ab3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.444920 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/97f21b9d-25bf-4a64-94ef-51d83b662ab3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " 
pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.444960 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn9pp\" (UniqueName: \"kubernetes.io/projected/97f21b9d-25bf-4a64-94ef-51d83b662ab3-kube-api-access-mn9pp\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.444992 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.445011 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/97f21b9d-25bf-4a64-94ef-51d83b662ab3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.445028 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97f21b9d-25bf-4a64-94ef-51d83b662ab3-config-data\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.445052 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/97f21b9d-25bf-4a64-94ef-51d83b662ab3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.445083 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.445102 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.445449 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.445470 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.445527 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.446333 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/97f21b9d-25bf-4a64-94ef-51d83b662ab3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.446778 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/97f21b9d-25bf-4a64-94ef-51d83b662ab3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.447528 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97f21b9d-25bf-4a64-94ef-51d83b662ab3-config-data\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.449602 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.450229 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/97f21b9d-25bf-4a64-94ef-51d83b662ab3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.450428 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/97f21b9d-25bf-4a64-94ef-51d83b662ab3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.452819 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.465153 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn9pp\" (UniqueName: \"kubernetes.io/projected/97f21b9d-25bf-4a64-94ef-51d83b662ab3-kube-api-access-mn9pp\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.471490 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") " pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.511857 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.526330 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.528475 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.530670 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.530853 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.531340 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.531520 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.531744 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.532776 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.532814 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hr5rb" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.551092 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.648202 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.648295 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.648433 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.648495 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4d8r\" (UniqueName: \"kubernetes.io/projected/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-kube-api-access-g4d8r\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.648533 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.648559 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.648667 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.648697 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.648723 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.648747 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.648832 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.750328 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.750380 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.750432 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.750464 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.750491 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.750514 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.750550 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.750594 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.750653 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.750689 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.750715 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4d8r\" (UniqueName: \"kubernetes.io/projected/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-kube-api-access-g4d8r\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.751087 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") device mount path 
\"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.751166 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.751279 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.751933 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.752196 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.752516 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.758714 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.759537 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.762379 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.764940 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.776803 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.779956 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4d8r\" (UniqueName: \"kubernetes.io/projected/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-kube-api-access-g4d8r\") pod \"rabbitmq-cell1-server-0\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.867928 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.894388 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.895603 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.899159 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-default-user" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.899641 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-server-conf" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.899974 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-server-dockercfg-lrn99" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.900130 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-erlang-cookie" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.900348 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-plugins-conf" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.901310 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-notifications-svc" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.901624 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-config-data" Jan 30 08:26:05 crc kubenswrapper[4870]: I0130 08:26:05.905413 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.054147 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.054188 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ab884a9-b47a-476a-8f89-140093b96527-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.054218 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/2ab884a9-b47a-476a-8f89-140093b96527-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.054239 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ab884a9-b47a-476a-8f89-140093b96527-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.054272 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ab884a9-b47a-476a-8f89-140093b96527-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.054317 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ab884a9-b47a-476a-8f89-140093b96527-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.054498 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpv8h\" (UniqueName: \"kubernetes.io/projected/2ab884a9-b47a-476a-8f89-140093b96527-kube-api-access-tpv8h\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.054566 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ab884a9-b47a-476a-8f89-140093b96527-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.054633 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ab884a9-b47a-476a-8f89-140093b96527-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.054697 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ab884a9-b47a-476a-8f89-140093b96527-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.054729 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ab884a9-b47a-476a-8f89-140093b96527-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " 
pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.156355 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ab884a9-b47a-476a-8f89-140093b96527-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.156405 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ab884a9-b47a-476a-8f89-140093b96527-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.156430 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ab884a9-b47a-476a-8f89-140093b96527-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.156465 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ab884a9-b47a-476a-8f89-140093b96527-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.156488 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ab884a9-b47a-476a-8f89-140093b96527-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.156531 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpv8h\" (UniqueName: \"kubernetes.io/projected/2ab884a9-b47a-476a-8f89-140093b96527-kube-api-access-tpv8h\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.156548 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ab884a9-b47a-476a-8f89-140093b96527-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.156570 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ab884a9-b47a-476a-8f89-140093b96527-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.156592 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ab884a9-b47a-476a-8f89-140093b96527-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: 
\"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.156607 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ab884a9-b47a-476a-8f89-140093b96527-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.156630 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.156754 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.158287 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ab884a9-b47a-476a-8f89-140093b96527-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.158335 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ab884a9-b47a-476a-8f89-140093b96527-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.159163 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ab884a9-b47a-476a-8f89-140093b96527-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.159259 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ab884a9-b47a-476a-8f89-140093b96527-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.159276 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ab884a9-b47a-476a-8f89-140093b96527-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.163200 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ab884a9-b47a-476a-8f89-140093b96527-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " 
pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.166519 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ab884a9-b47a-476a-8f89-140093b96527-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.177182 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ab884a9-b47a-476a-8f89-140093b96527-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.177250 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ab884a9-b47a-476a-8f89-140093b96527-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.182566 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpv8h\" (UniqueName: \"kubernetes.io/projected/2ab884a9-b47a-476a-8f89-140093b96527-kube-api-access-tpv8h\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.194962 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"2ab884a9-b47a-476a-8f89-140093b96527\") " pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:06 crc kubenswrapper[4870]: I0130 08:26:06.227385 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:26:07 crc kubenswrapper[4870]: E0130 08:26:07.024368 4870 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2ebdb93_c8ce_45c1_b10f_037853cc99d9.slice/crio-424c2889668056eeecef60e4a12c02f8812311e06dd82e3b2b01529aeee939aa\": RecentStats: unable to find data in memory cache]" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.209242 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.210676 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.213674 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-x4qpj" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.216190 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.217047 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.227864 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.230266 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.255244 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.374264 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.374360 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zcm4\" (UniqueName: \"kubernetes.io/projected/2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a-kube-api-access-2zcm4\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.374408 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.374429 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a-kolla-config\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.374474 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a-config-data-default\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.374512 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.374577 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.374603 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.475726 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.475777 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.475806 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.475847 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zcm4\" (UniqueName: \"kubernetes.io/projected/2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a-kube-api-access-2zcm4\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.475904 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.475945 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a-kolla-config\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.476004 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a-config-data-default\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.476043 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: 
\"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.476354 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.477570 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a-kolla-config\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.477754 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a-config-data-default\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.479560 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.479774 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.484801 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.485437 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.498163 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.501015 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zcm4\" (UniqueName: \"kubernetes.io/projected/2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a-kube-api-access-2zcm4\") pod \"openstack-galera-0\" (UID: \"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a\") " pod="openstack/openstack-galera-0" Jan 30 08:26:07 crc kubenswrapper[4870]: I0130 08:26:07.543117 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.741319 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.743026 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.748672 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-mqkt7" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.751109 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.751424 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.754679 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.765851 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.796720 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/31607550-5ccc-4b0b-9fbd-18007a61dcff-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.796853 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/31607550-5ccc-4b0b-9fbd-18007a61dcff-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.796961 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/31607550-5ccc-4b0b-9fbd-18007a61dcff-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.797054 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31607550-5ccc-4b0b-9fbd-18007a61dcff-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.798977 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31607550-5ccc-4b0b-9fbd-18007a61dcff-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.799024 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod 
\"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.799132 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/31607550-5ccc-4b0b-9fbd-18007a61dcff-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.799222 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv4xc\" (UniqueName: \"kubernetes.io/projected/31607550-5ccc-4b0b-9fbd-18007a61dcff-kube-api-access-xv4xc\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.900818 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/31607550-5ccc-4b0b-9fbd-18007a61dcff-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.900914 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv4xc\" (UniqueName: \"kubernetes.io/projected/31607550-5ccc-4b0b-9fbd-18007a61dcff-kube-api-access-xv4xc\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.900959 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/31607550-5ccc-4b0b-9fbd-18007a61dcff-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.901005 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/31607550-5ccc-4b0b-9fbd-18007a61dcff-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.901054 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/31607550-5ccc-4b0b-9fbd-18007a61dcff-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.901093 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31607550-5ccc-4b0b-9fbd-18007a61dcff-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.901129 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31607550-5ccc-4b0b-9fbd-18007a61dcff-operator-scripts\") pod \"openstack-cell1-galera-0\" 
(UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.901157 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.901448 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.901675 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/31607550-5ccc-4b0b-9fbd-18007a61dcff-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.901679 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/31607550-5ccc-4b0b-9fbd-18007a61dcff-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.902176 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/31607550-5ccc-4b0b-9fbd-18007a61dcff-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.903361 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31607550-5ccc-4b0b-9fbd-18007a61dcff-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.905679 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/31607550-5ccc-4b0b-9fbd-18007a61dcff-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.907831 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31607550-5ccc-4b0b-9fbd-18007a61dcff-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 08:26:08.929175 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv4xc\" (UniqueName: \"kubernetes.io/projected/31607550-5ccc-4b0b-9fbd-18007a61dcff-kube-api-access-xv4xc\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:08 crc kubenswrapper[4870]: I0130 
08:26:08.932258 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"31607550-5ccc-4b0b-9fbd-18007a61dcff\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.041954 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.043128 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.045406 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.045602 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.051999 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.055733 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-fbd9c" Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.105273 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpsnf\" (UniqueName: \"kubernetes.io/projected/d691b652-0077-4709-9e9d-16b87c8d3d3c-kube-api-access-qpsnf\") pod \"memcached-0\" (UID: \"d691b652-0077-4709-9e9d-16b87c8d3d3c\") " pod="openstack/memcached-0" Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.105327 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d691b652-0077-4709-9e9d-16b87c8d3d3c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d691b652-0077-4709-9e9d-16b87c8d3d3c\") " pod="openstack/memcached-0" Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.105480 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d691b652-0077-4709-9e9d-16b87c8d3d3c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d691b652-0077-4709-9e9d-16b87c8d3d3c\") " pod="openstack/memcached-0" Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.105671 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d691b652-0077-4709-9e9d-16b87c8d3d3c-config-data\") pod \"memcached-0\" (UID: \"d691b652-0077-4709-9e9d-16b87c8d3d3c\") " pod="openstack/memcached-0" Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.105835 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d691b652-0077-4709-9e9d-16b87c8d3d3c-kolla-config\") pod \"memcached-0\" (UID: \"d691b652-0077-4709-9e9d-16b87c8d3d3c\") " pod="openstack/memcached-0" Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.110911 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.207175 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d691b652-0077-4709-9e9d-16b87c8d3d3c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d691b652-0077-4709-9e9d-16b87c8d3d3c\") " pod="openstack/memcached-0" Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.207369 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d691b652-0077-4709-9e9d-16b87c8d3d3c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d691b652-0077-4709-9e9d-16b87c8d3d3c\") " pod="openstack/memcached-0" Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.207475 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d691b652-0077-4709-9e9d-16b87c8d3d3c-config-data\") pod \"memcached-0\" (UID: \"d691b652-0077-4709-9e9d-16b87c8d3d3c\") " pod="openstack/memcached-0" Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.207602 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d691b652-0077-4709-9e9d-16b87c8d3d3c-kolla-config\") pod \"memcached-0\" (UID: \"d691b652-0077-4709-9e9d-16b87c8d3d3c\") " pod="openstack/memcached-0" Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.207650 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpsnf\" (UniqueName: \"kubernetes.io/projected/d691b652-0077-4709-9e9d-16b87c8d3d3c-kube-api-access-qpsnf\") pod \"memcached-0\" (UID: \"d691b652-0077-4709-9e9d-16b87c8d3d3c\") " pod="openstack/memcached-0" Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.209197 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d691b652-0077-4709-9e9d-16b87c8d3d3c-kolla-config\") pod \"memcached-0\" (UID: \"d691b652-0077-4709-9e9d-16b87c8d3d3c\") " pod="openstack/memcached-0" Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.209204 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d691b652-0077-4709-9e9d-16b87c8d3d3c-config-data\") pod \"memcached-0\" (UID: \"d691b652-0077-4709-9e9d-16b87c8d3d3c\") " pod="openstack/memcached-0" Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.214330 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d691b652-0077-4709-9e9d-16b87c8d3d3c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d691b652-0077-4709-9e9d-16b87c8d3d3c\") " pod="openstack/memcached-0" Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.218511 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d691b652-0077-4709-9e9d-16b87c8d3d3c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d691b652-0077-4709-9e9d-16b87c8d3d3c\") " pod="openstack/memcached-0" Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.238715 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpsnf\" (UniqueName: \"kubernetes.io/projected/d691b652-0077-4709-9e9d-16b87c8d3d3c-kube-api-access-qpsnf\") pod \"memcached-0\" (UID: 
\"d691b652-0077-4709-9e9d-16b87c8d3d3c\") " pod="openstack/memcached-0" Jan 30 08:26:09 crc kubenswrapper[4870]: I0130 08:26:09.358932 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 08:26:10 crc kubenswrapper[4870]: I0130 08:26:10.780420 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 08:26:10 crc kubenswrapper[4870]: I0130 08:26:10.781492 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 08:26:10 crc kubenswrapper[4870]: I0130 08:26:10.787770 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-5g22f" Jan 30 08:26:10 crc kubenswrapper[4870]: I0130 08:26:10.798376 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 08:26:10 crc kubenswrapper[4870]: I0130 08:26:10.848731 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckp8w\" (UniqueName: \"kubernetes.io/projected/dd3b1e9c-90bb-46b7-8e19-edc1388b2a67-kube-api-access-ckp8w\") pod \"kube-state-metrics-0\" (UID: \"dd3b1e9c-90bb-46b7-8e19-edc1388b2a67\") " pod="openstack/kube-state-metrics-0" Jan 30 08:26:10 crc kubenswrapper[4870]: I0130 08:26:10.949629 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckp8w\" (UniqueName: \"kubernetes.io/projected/dd3b1e9c-90bb-46b7-8e19-edc1388b2a67-kube-api-access-ckp8w\") pod \"kube-state-metrics-0\" (UID: \"dd3b1e9c-90bb-46b7-8e19-edc1388b2a67\") " pod="openstack/kube-state-metrics-0" Jan 30 08:26:11 crc kubenswrapper[4870]: I0130 08:26:11.002742 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckp8w\" (UniqueName: \"kubernetes.io/projected/dd3b1e9c-90bb-46b7-8e19-edc1388b2a67-kube-api-access-ckp8w\") pod \"kube-state-metrics-0\" (UID: \"dd3b1e9c-90bb-46b7-8e19-edc1388b2a67\") " pod="openstack/kube-state-metrics-0" Jan 30 08:26:11 crc kubenswrapper[4870]: I0130 08:26:11.095903 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.274322 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.276449 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.279362 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.279565 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.280629 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.280754 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.280930 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.281046 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.281063 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-88lql" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.285125 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.299313 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.372116 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z5p8\" (UniqueName: \"kubernetes.io/projected/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-kube-api-access-9z5p8\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.372194 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.372235 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.372265 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-config\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.372316 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.372345 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.372378 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.372433 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.372461 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.372506 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.473443 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z5p8\" (UniqueName: \"kubernetes.io/projected/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-kube-api-access-9z5p8\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.473524 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.473566 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.473600 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-config\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.473678 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.473725 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.473772 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.473931 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.473985 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.474040 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.475362 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.476051 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.476149 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.478349 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.478898 4870 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.478938 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b608408b27cf3925c08af2a9b3a133a2b5eb87db3a290a5641371b0533b7f7d2/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.478905 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.479509 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-config\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.487617 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.488280 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.500228 4870 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9z5p8\" (UniqueName: \"kubernetes.io/projected/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-kube-api-access-9z5p8\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.512429 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\") pod \"prometheus-metric-storage-0\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:12 crc kubenswrapper[4870]: I0130 08:26:12.613396 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.446352 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rwchz"] Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.447967 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rwchz" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.450826 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-zmt2p" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.451216 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.451397 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.455339 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-gznh8"] Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.457423 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.462569 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rwchz"] Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.470406 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-gznh8"] Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.519005 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2-var-log\") pod \"ovn-controller-ovs-gznh8\" (UID: \"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2\") " pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.519347 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpss8\" (UniqueName: \"kubernetes.io/projected/b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2-kube-api-access-dpss8\") pod \"ovn-controller-ovs-gznh8\" (UID: \"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2\") " pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.519422 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2-etc-ovs\") pod \"ovn-controller-ovs-gznh8\" (UID: \"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2\") " pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.519466 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/496b707b-8de6-4228-b4fd-a48f3709586c-var-log-ovn\") pod \"ovn-controller-rwchz\" (UID: \"496b707b-8de6-4228-b4fd-a48f3709586c\") " pod="openstack/ovn-controller-rwchz" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.519507 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/496b707b-8de6-4228-b4fd-a48f3709586c-var-run-ovn\") pod \"ovn-controller-rwchz\" (UID: \"496b707b-8de6-4228-b4fd-a48f3709586c\") " pod="openstack/ovn-controller-rwchz" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.519598 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2-var-run\") pod \"ovn-controller-ovs-gznh8\" (UID: \"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2\") " pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.519637 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/496b707b-8de6-4228-b4fd-a48f3709586c-combined-ca-bundle\") pod \"ovn-controller-rwchz\" (UID: \"496b707b-8de6-4228-b4fd-a48f3709586c\") " pod="openstack/ovn-controller-rwchz" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.519803 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2-scripts\") pod \"ovn-controller-ovs-gznh8\" (UID: \"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2\") " pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:26:14 crc kubenswrapper[4870]: 
I0130 08:26:14.519870 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2-var-lib\") pod \"ovn-controller-ovs-gznh8\" (UID: \"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2\") " pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.519966 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/496b707b-8de6-4228-b4fd-a48f3709586c-var-run\") pod \"ovn-controller-rwchz\" (UID: \"496b707b-8de6-4228-b4fd-a48f3709586c\") " pod="openstack/ovn-controller-rwchz" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.520064 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/496b707b-8de6-4228-b4fd-a48f3709586c-scripts\") pod \"ovn-controller-rwchz\" (UID: \"496b707b-8de6-4228-b4fd-a48f3709586c\") " pod="openstack/ovn-controller-rwchz" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.520105 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/496b707b-8de6-4228-b4fd-a48f3709586c-ovn-controller-tls-certs\") pod \"ovn-controller-rwchz\" (UID: \"496b707b-8de6-4228-b4fd-a48f3709586c\") " pod="openstack/ovn-controller-rwchz" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.520175 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2p6x\" (UniqueName: \"kubernetes.io/projected/496b707b-8de6-4228-b4fd-a48f3709586c-kube-api-access-m2p6x\") pod \"ovn-controller-rwchz\" (UID: \"496b707b-8de6-4228-b4fd-a48f3709586c\") " pod="openstack/ovn-controller-rwchz" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.547464 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d56d856cf-n69v7"] Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.621560 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2-var-lib\") pod \"ovn-controller-ovs-gznh8\" (UID: \"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2\") " pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.621608 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/496b707b-8de6-4228-b4fd-a48f3709586c-var-run\") pod \"ovn-controller-rwchz\" (UID: \"496b707b-8de6-4228-b4fd-a48f3709586c\") " pod="openstack/ovn-controller-rwchz" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.621643 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/496b707b-8de6-4228-b4fd-a48f3709586c-scripts\") pod \"ovn-controller-rwchz\" (UID: \"496b707b-8de6-4228-b4fd-a48f3709586c\") " pod="openstack/ovn-controller-rwchz" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.621663 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/496b707b-8de6-4228-b4fd-a48f3709586c-ovn-controller-tls-certs\") pod \"ovn-controller-rwchz\" (UID: \"496b707b-8de6-4228-b4fd-a48f3709586c\") " pod="openstack/ovn-controller-rwchz" 
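
[Note] The paired reconciler_common.go:245 ("VerifyControllerAttachedVolume started") and reconciler_common.go:218 ("MountVolume started") entries above, each followed later by an operation_generator.go:637 "MountVolume.SetUp succeeded", are the kubelet volume manager's desired-state-versus-actual-state reconcile loop at work. A minimal sketch of that pattern, in illustrative Go with hypothetical names (not the kubelet's actual code):

    package main

    import "fmt"

    // volume is a hypothetical stand-in for the volume manager's bookkeeping.
    type volume struct {
    	name string
    	pod  string
    }

    func main() {
    	// Desired state of world: volumes the pod specs above require.
    	desired := []volume{
    		{name: "scripts", pod: "ovn-controller-rwchz"},
    		{name: "var-run", pod: "ovn-controller-rwchz"},
    		{name: "etc-ovs", pod: "ovn-controller-ovs-gznh8"},
    	}
    	// Actual state of world: what is already mounted on the node.
    	mounted := map[volume]bool{}

    	// One reconcile pass: set up whatever is desired but not yet actual.
    	for _, v := range desired {
    		if mounted[v] {
    			continue
    		}
    		fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
    		// A real implementation would bind-mount or populate the volume
    		// here; the operation is idempotent, so repeated passes converge.
    		mounted[v] = true
    		fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.name, v.pod)
    	}
    }

Because each operation is idempotent, a repeated pass over the same desired state produces no further work, which is why every volume appears exactly once as "started" and once as "succeeded".
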
Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.621694 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2p6x\" (UniqueName: \"kubernetes.io/projected/496b707b-8de6-4228-b4fd-a48f3709586c-kube-api-access-m2p6x\") pod \"ovn-controller-rwchz\" (UID: \"496b707b-8de6-4228-b4fd-a48f3709586c\") " pod="openstack/ovn-controller-rwchz" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.621719 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2-var-log\") pod \"ovn-controller-ovs-gznh8\" (UID: \"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2\") " pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.621741 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpss8\" (UniqueName: \"kubernetes.io/projected/b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2-kube-api-access-dpss8\") pod \"ovn-controller-ovs-gznh8\" (UID: \"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2\") " pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.621772 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2-etc-ovs\") pod \"ovn-controller-ovs-gznh8\" (UID: \"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2\") " pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.621796 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/496b707b-8de6-4228-b4fd-a48f3709586c-var-log-ovn\") pod \"ovn-controller-rwchz\" (UID: \"496b707b-8de6-4228-b4fd-a48f3709586c\") " pod="openstack/ovn-controller-rwchz" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.621818 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/496b707b-8de6-4228-b4fd-a48f3709586c-var-run-ovn\") pod \"ovn-controller-rwchz\" (UID: \"496b707b-8de6-4228-b4fd-a48f3709586c\") " pod="openstack/ovn-controller-rwchz" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.621847 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2-var-run\") pod \"ovn-controller-ovs-gznh8\" (UID: \"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2\") " pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.621864 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/496b707b-8de6-4228-b4fd-a48f3709586c-combined-ca-bundle\") pod \"ovn-controller-rwchz\" (UID: \"496b707b-8de6-4228-b4fd-a48f3709586c\") " pod="openstack/ovn-controller-rwchz" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.621950 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2-scripts\") pod \"ovn-controller-ovs-gznh8\" (UID: \"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2\") " pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.622162 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2-etc-ovs\") pod \"ovn-controller-ovs-gznh8\" (UID: \"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2\") " pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.624484 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2-scripts\") pod \"ovn-controller-ovs-gznh8\" (UID: \"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2\") " pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.624484 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/496b707b-8de6-4228-b4fd-a48f3709586c-scripts\") pod \"ovn-controller-rwchz\" (UID: \"496b707b-8de6-4228-b4fd-a48f3709586c\") " pod="openstack/ovn-controller-rwchz" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.627176 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/496b707b-8de6-4228-b4fd-a48f3709586c-var-run-ovn\") pod \"ovn-controller-rwchz\" (UID: \"496b707b-8de6-4228-b4fd-a48f3709586c\") " pod="openstack/ovn-controller-rwchz" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.627628 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2-var-run\") pod \"ovn-controller-ovs-gznh8\" (UID: \"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2\") " pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.627680 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2-var-log\") pod \"ovn-controller-ovs-gznh8\" (UID: \"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2\") " pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.627707 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/496b707b-8de6-4228-b4fd-a48f3709586c-var-run\") pod \"ovn-controller-rwchz\" (UID: \"496b707b-8de6-4228-b4fd-a48f3709586c\") " pod="openstack/ovn-controller-rwchz" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.627769 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2-var-lib\") pod \"ovn-controller-ovs-gznh8\" (UID: \"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2\") " pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.627760 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/496b707b-8de6-4228-b4fd-a48f3709586c-var-log-ovn\") pod \"ovn-controller-rwchz\" (UID: \"496b707b-8de6-4228-b4fd-a48f3709586c\") " pod="openstack/ovn-controller-rwchz" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.629916 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/496b707b-8de6-4228-b4fd-a48f3709586c-combined-ca-bundle\") pod \"ovn-controller-rwchz\" (UID: \"496b707b-8de6-4228-b4fd-a48f3709586c\") " pod="openstack/ovn-controller-rwchz" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.638605 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/496b707b-8de6-4228-b4fd-a48f3709586c-ovn-controller-tls-certs\") pod \"ovn-controller-rwchz\" (UID: \"496b707b-8de6-4228-b4fd-a48f3709586c\") " pod="openstack/ovn-controller-rwchz" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.640222 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2p6x\" (UniqueName: \"kubernetes.io/projected/496b707b-8de6-4228-b4fd-a48f3709586c-kube-api-access-m2p6x\") pod \"ovn-controller-rwchz\" (UID: \"496b707b-8de6-4228-b4fd-a48f3709586c\") " pod="openstack/ovn-controller-rwchz" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.645621 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpss8\" (UniqueName: \"kubernetes.io/projected/b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2-kube-api-access-dpss8\") pod \"ovn-controller-ovs-gznh8\" (UID: \"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2\") " pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.779694 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rwchz" Jan 30 08:26:14 crc kubenswrapper[4870]: I0130 08:26:14.792614 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.319640 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.321612 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.324601 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-72pm2" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.328060 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.328361 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.331333 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.334470 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.337679 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.435271 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/625f2d84-6699-4e9f-881e-e96509760e9d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.435339 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/625f2d84-6699-4e9f-881e-e96509760e9d-config\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.435508 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmxrq\" (UniqueName: \"kubernetes.io/projected/625f2d84-6699-4e9f-881e-e96509760e9d-kube-api-access-vmxrq\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.435610 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.435681 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/625f2d84-6699-4e9f-881e-e96509760e9d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.435705 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/625f2d84-6699-4e9f-881e-e96509760e9d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.435807 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/625f2d84-6699-4e9f-881e-e96509760e9d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.435904 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/625f2d84-6699-4e9f-881e-e96509760e9d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.537202 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/625f2d84-6699-4e9f-881e-e96509760e9d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.537277 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/625f2d84-6699-4e9f-881e-e96509760e9d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.537323 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/625f2d84-6699-4e9f-881e-e96509760e9d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.537361 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/625f2d84-6699-4e9f-881e-e96509760e9d-config\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.537401 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmxrq\" (UniqueName: \"kubernetes.io/projected/625f2d84-6699-4e9f-881e-e96509760e9d-kube-api-access-vmxrq\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.537438 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.537470 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/625f2d84-6699-4e9f-881e-e96509760e9d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.537494 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/625f2d84-6699-4e9f-881e-e96509760e9d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.538202 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/625f2d84-6699-4e9f-881e-e96509760e9d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.538614 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.541441 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/625f2d84-6699-4e9f-881e-e96509760e9d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.542764 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/625f2d84-6699-4e9f-881e-e96509760e9d-config\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.543449 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/625f2d84-6699-4e9f-881e-e96509760e9d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.548247 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/625f2d84-6699-4e9f-881e-e96509760e9d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.556471 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/625f2d84-6699-4e9f-881e-e96509760e9d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.557292 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmxrq\" (UniqueName: \"kubernetes.io/projected/625f2d84-6699-4e9f-881e-e96509760e9d-kube-api-access-vmxrq\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.565599 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"625f2d84-6699-4e9f-881e-e96509760e9d\") " pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:15 crc kubenswrapper[4870]: I0130 08:26:15.684966 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:17 crc kubenswrapper[4870]: E0130 08:26:17.286027 4870 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2ebdb93_c8ce_45c1_b10f_037853cc99d9.slice/crio-424c2889668056eeecef60e4a12c02f8812311e06dd82e3b2b01529aeee939aa\": RecentStats: unable to find data in memory cache]" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.487717 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.489283 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.494686 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.494953 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.495045 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-wgfkw" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.495236 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.504894 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.608215 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9a5fd23-1240-4284-91cf-b57f4b2e3d02-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.608344 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e9a5fd23-1240-4284-91cf-b57f4b2e3d02-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.608401 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw7h5\" (UniqueName: \"kubernetes.io/projected/e9a5fd23-1240-4284-91cf-b57f4b2e3d02-kube-api-access-xw7h5\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.608537 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9a5fd23-1240-4284-91cf-b57f4b2e3d02-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.608584 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.608613 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a5fd23-1240-4284-91cf-b57f4b2e3d02-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.608720 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9a5fd23-1240-4284-91cf-b57f4b2e3d02-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.608760 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9a5fd23-1240-4284-91cf-b57f4b2e3d02-config\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.712093 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9a5fd23-1240-4284-91cf-b57f4b2e3d02-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.712158 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.712199 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a5fd23-1240-4284-91cf-b57f4b2e3d02-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.712275 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9a5fd23-1240-4284-91cf-b57f4b2e3d02-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.712306 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9a5fd23-1240-4284-91cf-b57f4b2e3d02-config\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.712330 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9a5fd23-1240-4284-91cf-b57f4b2e3d02-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.712388 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e9a5fd23-1240-4284-91cf-b57f4b2e3d02-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.712420 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw7h5\" (UniqueName: \"kubernetes.io/projected/e9a5fd23-1240-4284-91cf-b57f4b2e3d02-kube-api-access-xw7h5\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.712514 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.713093 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e9a5fd23-1240-4284-91cf-b57f4b2e3d02-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.713619 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9a5fd23-1240-4284-91cf-b57f4b2e3d02-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.714061 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9a5fd23-1240-4284-91cf-b57f4b2e3d02-config\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.722851 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9a5fd23-1240-4284-91cf-b57f4b2e3d02-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.723010 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9a5fd23-1240-4284-91cf-b57f4b2e3d02-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.723059 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a5fd23-1240-4284-91cf-b57f4b2e3d02-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.756783 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw7h5\" (UniqueName: \"kubernetes.io/projected/e9a5fd23-1240-4284-91cf-b57f4b2e3d02-kube-api-access-xw7h5\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.767708 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e9a5fd23-1240-4284-91cf-b57f4b2e3d02\") " pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:18 crc kubenswrapper[4870]: I0130 08:26:18.824912 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:19 crc kubenswrapper[4870]: W0130 08:26:19.137679 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod033dbc66_0baa_46b3_8fda_3881303e4e40.slice/crio-be7d1aa3f3670be6d94e4a7facc9d6deaa6f49ecc11c230b4a15c8f46a3117b6 WatchSource:0}: Error finding container be7d1aa3f3670be6d94e4a7facc9d6deaa6f49ecc11c230b4a15c8f46a3117b6: Status 404 returned error can't find the container with id be7d1aa3f3670be6d94e4a7facc9d6deaa6f49ecc11c230b4a15c8f46a3117b6 Jan 30 08:26:19 crc kubenswrapper[4870]: I0130 08:26:19.992807 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" event={"ID":"033dbc66-0baa-46b3-8fda-3881303e4e40","Type":"ContainerStarted","Data":"be7d1aa3f3670be6d94e4a7facc9d6deaa6f49ecc11c230b4a15c8f46a3117b6"} Jan 30 08:26:20 crc kubenswrapper[4870]: E0130 08:26:20.048315 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 30 08:26:20 crc kubenswrapper[4870]: E0130 08:26:20.048378 4870 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 30 08:26:20 crc kubenswrapper[4870]: E0130 08:26:20.048495 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.23:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hjl4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6dd95798b9-btgvj_openstack(c135f9a2-386b-4108-a40d-a703e4d72b13): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:26:20 crc kubenswrapper[4870]: E0130 08:26:20.049655 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6dd95798b9-btgvj" podUID="c135f9a2-386b-4108-a40d-a703e4d72b13" Jan 30 08:26:20 crc kubenswrapper[4870]: E0130 08:26:20.097183 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 30 08:26:20 crc kubenswrapper[4870]: E0130 08:26:20.097237 4870 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 30 08:26:20 crc kubenswrapper[4870]: E0130 08:26:20.097349 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.23:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwpxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-9cd4f5bf5-n8lzz_openstack(7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:26:20 crc kubenswrapper[4870]: E0130 08:26:20.098500 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-9cd4f5bf5-n8lzz" podUID="7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2" Jan 30 08:26:20 crc kubenswrapper[4870]: I0130 08:26:20.400151 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:26:20 crc kubenswrapper[4870]: I0130 08:26:20.649775 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 08:26:20 crc kubenswrapper[4870]: W0130 08:26:20.656500 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd691b652_0077_4709_9e9d_16b87c8d3d3c.slice/crio-af6261a516f97db7788d12843796598bca22d6a5607376e0bbfd238a94ac03f5 WatchSource:0}: Error finding container af6261a516f97db7788d12843796598bca22d6a5607376e0bbfd238a94ac03f5: Status 404 returned error can't find the container with id af6261a516f97db7788d12843796598bca22d6a5607376e0bbfd238a94ac03f5 Jan 30 08:26:20 crc kubenswrapper[4870]: I0130 08:26:20.656647 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 08:26:20 crc kubenswrapper[4870]: W0130 08:26:20.659174 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd3b1e9c_90bb_46b7_8e19_edc1388b2a67.slice/crio-e0a5d30d3c77180ecb50a167671b3d1f7955018f6057ec5124abb113c1fa8b6f WatchSource:0}: Error finding container e0a5d30d3c77180ecb50a167671b3d1f7955018f6057ec5124abb113c1fa8b6f: Status 404 returned error can't find the 
container with id e0a5d30d3c77180ecb50a167671b3d1f7955018f6057ec5124abb113c1fa8b6f Jan 30 08:26:20 crc kubenswrapper[4870]: W0130 08:26:20.660778 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ce45bb8_e721_40bb_a9fb_ac0d6b0deb4a.slice/crio-b4a40d16fc5fb149c0fc174da87c73a5c696d7b815a99e3d16fd9b278a7ad8d2 WatchSource:0}: Error finding container b4a40d16fc5fb149c0fc174da87c73a5c696d7b815a99e3d16fd9b278a7ad8d2: Status 404 returned error can't find the container with id b4a40d16fc5fb149c0fc174da87c73a5c696d7b815a99e3d16fd9b278a7ad8d2 Jan 30 08:26:20 crc kubenswrapper[4870]: I0130 08:26:20.662930 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 08:26:20 crc kubenswrapper[4870]: I0130 08:26:20.668479 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 08:26:20 crc kubenswrapper[4870]: I0130 08:26:20.674717 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.000622 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"97f21b9d-25bf-4a64-94ef-51d83b662ab3","Type":"ContainerStarted","Data":"e007871b6d10423ef6514301a7948e0b65aeec9e801d811cb06f4a5040316a29"} Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.001718 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dd3b1e9c-90bb-46b7-8e19-edc1388b2a67","Type":"ContainerStarted","Data":"e0a5d30d3c77180ecb50a167671b3d1f7955018f6057ec5124abb113c1fa8b6f"} Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.003153 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d691b652-0077-4709-9e9d-16b87c8d3d3c","Type":"ContainerStarted","Data":"af6261a516f97db7788d12843796598bca22d6a5607376e0bbfd238a94ac03f5"} Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.004101 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"31607550-5ccc-4b0b-9fbd-18007a61dcff","Type":"ContainerStarted","Data":"99ab441e631a7a34bd07c114bbba4517a1c594a32c4de2267edc0cab42d7f3d5"} Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.005069 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd","Type":"ContainerStarted","Data":"3f0499acc4a6b0c8f2d313af1131c23462a36b6d1d5cfab2eb6312a0f9c1c357"} Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.006935 4870 generic.go:334] "Generic (PLEG): container finished" podID="033dbc66-0baa-46b3-8fda-3881303e4e40" containerID="fa2967a57d46eadef06dee8c0ac950d7fe80b8807314b92bceff366e74a2aaa8" exitCode=0 Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.007008 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" event={"ID":"033dbc66-0baa-46b3-8fda-3881303e4e40","Type":"ContainerDied","Data":"fa2967a57d46eadef06dee8c0ac950d7fe80b8807314b92bceff366e74a2aaa8"} Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.009506 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a","Type":"ContainerStarted","Data":"b4a40d16fc5fb149c0fc174da87c73a5c696d7b815a99e3d16fd9b278a7ad8d2"} Jan 30 08:26:21 crc kubenswrapper[4870]: E0130 
08:26:21.066975 4870 mount_linux.go:282] Mount failed: exit status 32
Jan 30 08:26:21 crc kubenswrapper[4870]: Mounting command: mount
Jan 30 08:26:21 crc kubenswrapper[4870]: Mounting arguments: --no-canonicalize -o bind /proc/4870/fd/26 /var/lib/kubelet/pods/033dbc66-0baa-46b3-8fda-3881303e4e40/volume-subpaths/dns-svc/dnsmasq-dns/1
Jan 30 08:26:21 crc kubenswrapper[4870]: Output: mount: /var/lib/kubelet/pods/033dbc66-0baa-46b3-8fda-3881303e4e40/volume-subpaths/dns-svc/dnsmasq-dns/1: mount(2) system call failed: No such file or directory.
Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.078788 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57467f675c-j7lcp"]
Jan 30 08:26:21 crc kubenswrapper[4870]: E0130 08:26:21.101972 4870 kubelet_pods.go:349] "Failed to prepare subPath for volumeMount of the container" err=<
Jan 30 08:26:21 crc kubenswrapper[4870]: error mounting /var/lib/kubelet/pods/033dbc66-0baa-46b3-8fda-3881303e4e40/volumes/kubernetes.io~configmap/dns-svc/..2026_01_30_08_26_04.1033670092/dns-svc: mount failed: exit status 32
Jan 30 08:26:21 crc kubenswrapper[4870]: Mounting command: mount
Jan 30 08:26:21 crc kubenswrapper[4870]: Mounting arguments: --no-canonicalize -o bind /proc/4870/fd/26 /var/lib/kubelet/pods/033dbc66-0baa-46b3-8fda-3881303e4e40/volume-subpaths/dns-svc/dnsmasq-dns/1
Jan 30 08:26:21 crc kubenswrapper[4870]: Output: mount: /var/lib/kubelet/pods/033dbc66-0baa-46b3-8fda-3881303e4e40/volume-subpaths/dns-svc/dnsmasq-dns/1: mount(2) system call failed: No such file or directory.
Jan 30 08:26:21 crc kubenswrapper[4870]: > containerName="dnsmasq-dns" volumeMountName="dns-svc"
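
[Note] The failure above is the kubelet's subPath preparation: it opens the resolved subPath source and bind-mounts it via /proc/<kubelet-pid>/fd/<n> (here /proc/4870/fd/26, matching kubenswrapper's PID) to defeat symlink races. The configmap's timestamped atomic-writer directory (..2026_01_30_08_26_04.1033670092) had been swapped while the pod was being updated, so the volume-subpaths target no longer existed and mount(2) returned ENOENT. A minimal reproduction of just the mount call, under assumptions: Linux, run as root (unprivileged runs may fail with EPERM instead), paths invented for illustration:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/sys/unix"
    )

    func main() {
    	// Source directory exists; the bind-mount target deliberately does
    	// not, mirroring the vanished volume-subpaths path in the log.
    	src, err := os.MkdirTemp("", "subpath-src")
    	if err != nil {
    		panic(err)
    	}
    	defer os.RemoveAll(src)

    	target := "/nonexistent/volume-subpaths/dns-svc/dnsmasq-dns/1"
    	err = unix.Mount(src, target, "", unix.MS_BIND, "")
    	// Expected: ENOENT, the errno behind "mount(2) system call failed:
    	// No such file or directory" above.
    	fmt.Printf("bind mount onto %s: %v (ENOENT: %v)\n", target, err, err == unix.ENOENT)
    }
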
Jan 30 08:26:21 crc kubenswrapper[4870]: W0130 08:26:21.102160 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7cb8c3d_2157_4c52_a196_24d514b098ee.slice/crio-166a34a595f3784fb50b62bea8c6b3aecbe48d2fb8d288bfed3951f4b3b662b9 WatchSource:0}: Error finding container 166a34a595f3784fb50b62bea8c6b3aecbe48d2fb8d288bfed3951f4b3b662b9: Status 404 returned error can't find the container with id 166a34a595f3784fb50b62bea8c6b3aecbe48d2fb8d288bfed3951f4b3b662b9
Jan 30 08:26:21 crc kubenswrapper[4870]: E0130 08:26:21.102139 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:dnsmasq-dns,Image:38.102.83.23:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zd26p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7d56d856cf-n69v7_openstack(033dbc66-0baa-46b3-8fda-3881303e4e40): CreateContainerConfigError: failed to prepare subPath for volumeMount \"dns-svc\" of container \"dnsmasq-dns\"" logger="UnhandledError"
Jan 30 08:26:21 crc kubenswrapper[4870]: E0130 08:26:21.105553 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerConfigError: \"failed to prepare subPath for volumeMount \\\"dns-svc\\\" of container \\\"dnsmasq-dns\\\"\"" pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" podUID="033dbc66-0baa-46b3-8fda-3881303e4e40"
Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.160056 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64594fd94f-bp9gj"]
Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.182185 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.195648 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rwchz"]
Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.211182 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"]
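
[Note] The same failure surfaces three times above at successive layers: kuberuntime_manager dumps the full Container spec as an "Unhandled Error", pod_workers condenses it to "Error syncing pod, skipping" classified as CreateContainerConfigError, and the pod is requeued to retry with backoff. A sketch of that wrap-and-classify pattern using Go's standard error wrapping; the sentinel and helper names are illustrative, not kubelet exports:

    package main

    import (
    	"errors"
    	"fmt"
    )

    // errCreateContainerConfig is an illustrative sentinel, not a kubelet export.
    var errCreateContainerConfig = errors.New("CreateContainerConfigError")

    // prepareSubPath stands in for the failing bind mount above.
    func prepareSubPath() error {
    	return fmt.Errorf("mount failed: %w", errors.New("no such file or directory"))
    }

    // startContainer wraps the low-level failure the way the runtime layer
    // surfaces it, preserving the sentinel for later classification.
    func startContainer() error {
    	if err := prepareSubPath(); err != nil {
    		return fmt.Errorf("%w: failed to prepare subPath for volumeMount %q: %v",
    			errCreateContainerConfig, "dns-svc", err)
    	}
    	return nil
    }

    func main() {
    	if err := startContainer(); err != nil {
    		// pod_workers-style condensation; the kubelet requeues and retries.
    		fmt.Printf("Error syncing pod, skipping: failed to StartContainer: %v\n", err)
    		fmt.Println("classified as config error:", errors.Is(err, errCreateContainerConfig))
    	}
    }
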
Jan 30 08:26:21 crc kubenswrapper[4870]: W0130 08:26:21.236857 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ab884a9_b47a_476a_8f89_140093b96527.slice/crio-b7b01500f518cd5372801924da9b7f6f7f843abe8b3213b1c7678796b6012613 WatchSource:0}: Error finding container b7b01500f518cd5372801924da9b7f6f7f843abe8b3213b1c7678796b6012613: Status 404 returned error can't find the container with id b7b01500f518cd5372801924da9b7f6f7f843abe8b3213b1c7678796b6012613
Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.267150 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 30 08:26:21 crc kubenswrapper[4870]: W0130 08:26:21.284234 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod625f2d84_6699_4e9f_881e_e96509760e9d.slice/crio-34c67fa453057204c29fb530e7edee04517cc65eea6c392dcd2f034238d625bf WatchSource:0}: Error finding container 34c67fa453057204c29fb530e7edee04517cc65eea6c392dcd2f034238d625bf: Status 404 returned error can't find the container with id 34c67fa453057204c29fb530e7edee04517cc65eea6c392dcd2f034238d625bf
Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.554404 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dd95798b9-btgvj"
Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.680043 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9cd4f5bf5-n8lzz"
Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.680216 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c135f9a2-386b-4108-a40d-a703e4d72b13-config\") pod \"c135f9a2-386b-4108-a40d-a703e4d72b13\" (UID: \"c135f9a2-386b-4108-a40d-a703e4d72b13\") " 
Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.680347 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjl4f\" (UniqueName: \"kubernetes.io/projected/c135f9a2-386b-4108-a40d-a703e4d72b13-kube-api-access-hjl4f\") pod \"c135f9a2-386b-4108-a40d-a703e4d72b13\" (UID: \"c135f9a2-386b-4108-a40d-a703e4d72b13\") " 
Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.680421 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c135f9a2-386b-4108-a40d-a703e4d72b13-dns-svc\") pod \"c135f9a2-386b-4108-a40d-a703e4d72b13\" (UID: \"c135f9a2-386b-4108-a40d-a703e4d72b13\") " 
Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.681061 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c135f9a2-386b-4108-a40d-a703e4d72b13-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c135f9a2-386b-4108-a40d-a703e4d72b13" (UID: "c135f9a2-386b-4108-a40d-a703e4d72b13"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.681049 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c135f9a2-386b-4108-a40d-a703e4d72b13-config" (OuterVolumeSpecName: "config") pod "c135f9a2-386b-4108-a40d-a703e4d72b13" (UID: "c135f9a2-386b-4108-a40d-a703e4d72b13"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.685943 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c135f9a2-386b-4108-a40d-a703e4d72b13-kube-api-access-hjl4f" (OuterVolumeSpecName: "kube-api-access-hjl4f") pod "c135f9a2-386b-4108-a40d-a703e4d72b13" (UID: "c135f9a2-386b-4108-a40d-a703e4d72b13"). InnerVolumeSpecName "kube-api-access-hjl4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.782252 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwpxs\" (UniqueName: \"kubernetes.io/projected/7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2-kube-api-access-bwpxs\") pod \"7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2\" (UID: \"7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2\") " Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.782548 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2-config\") pod \"7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2\" (UID: \"7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2\") " Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.783020 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c135f9a2-386b-4108-a40d-a703e4d72b13-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.783300 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c135f9a2-386b-4108-a40d-a703e4d72b13-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.783311 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjl4f\" (UniqueName: \"kubernetes.io/projected/c135f9a2-386b-4108-a40d-a703e4d72b13-kube-api-access-hjl4f\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.783230 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2-config" (OuterVolumeSpecName: "config") pod "7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2" (UID: "7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.786398 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2-kube-api-access-bwpxs" (OuterVolumeSpecName: "kube-api-access-bwpxs") pod "7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2" (UID: "7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2"). InnerVolumeSpecName "kube-api-access-bwpxs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.886637 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:21 crc kubenswrapper[4870]: I0130 08:26:21.887013 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwpxs\" (UniqueName: \"kubernetes.io/projected/7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2-kube-api-access-bwpxs\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:22 crc kubenswrapper[4870]: I0130 08:26:22.025399 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rwchz" event={"ID":"496b707b-8de6-4228-b4fd-a48f3709586c","Type":"ContainerStarted","Data":"2a49278dde0d40d9881be346be437fe8d7204dd6aa615f031f43d2552146f652"} Jan 30 08:26:22 crc kubenswrapper[4870]: I0130 08:26:22.027572 4870 generic.go:334] "Generic (PLEG): container finished" podID="a7cb8c3d-2157-4c52-a196-24d514b098ee" containerID="f169d9b6f52360d0a659533116b73054218a299b89ad6a74d7b9d17f3b0818bc" exitCode=0 Jan 30 08:26:22 crc kubenswrapper[4870]: I0130 08:26:22.027629 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57467f675c-j7lcp" event={"ID":"a7cb8c3d-2157-4c52-a196-24d514b098ee","Type":"ContainerDied","Data":"f169d9b6f52360d0a659533116b73054218a299b89ad6a74d7b9d17f3b0818bc"} Jan 30 08:26:22 crc kubenswrapper[4870]: I0130 08:26:22.027648 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57467f675c-j7lcp" event={"ID":"a7cb8c3d-2157-4c52-a196-24d514b098ee","Type":"ContainerStarted","Data":"166a34a595f3784fb50b62bea8c6b3aecbe48d2fb8d288bfed3951f4b3b662b9"} Jan 30 08:26:22 crc kubenswrapper[4870]: I0130 08:26:22.030535 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7","Type":"ContainerStarted","Data":"709385d651d4bcb103b7d7d0e2928451ab8b488203130eb1f7baa5322860b0f5"} Jan 30 08:26:22 crc kubenswrapper[4870]: I0130 08:26:22.032016 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"2ab884a9-b47a-476a-8f89-140093b96527","Type":"ContainerStarted","Data":"b7b01500f518cd5372801924da9b7f6f7f843abe8b3213b1c7678796b6012613"} Jan 30 08:26:22 crc kubenswrapper[4870]: I0130 08:26:22.034516 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dd95798b9-btgvj" Jan 30 08:26:22 crc kubenswrapper[4870]: I0130 08:26:22.037339 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dd95798b9-btgvj" event={"ID":"c135f9a2-386b-4108-a40d-a703e4d72b13","Type":"ContainerDied","Data":"85ed67ebbca181d72c31532b0492443d898520cd3d920466a779ed625e90274a"} Jan 30 08:26:22 crc kubenswrapper[4870]: I0130 08:26:22.086149 4870 generic.go:334] "Generic (PLEG): container finished" podID="f491adde-145d-44fc-9414-0fd92c41a114" containerID="e0bc981d691db0b6447c821e168e88197478822865e6c93bfc11c3c320369da7" exitCode=0 Jan 30 08:26:22 crc kubenswrapper[4870]: I0130 08:26:22.137837 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9cd4f5bf5-n8lzz" Jan 30 08:26:22 crc kubenswrapper[4870]: I0130 08:26:22.140465 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64594fd94f-bp9gj" event={"ID":"f491adde-145d-44fc-9414-0fd92c41a114","Type":"ContainerDied","Data":"e0bc981d691db0b6447c821e168e88197478822865e6c93bfc11c3c320369da7"} Jan 30 08:26:22 crc kubenswrapper[4870]: I0130 08:26:22.140516 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64594fd94f-bp9gj" event={"ID":"f491adde-145d-44fc-9414-0fd92c41a114","Type":"ContainerStarted","Data":"86a61707384abe477056d320911123a063b0ba5f70255594ea4531c5ddc156a1"} Jan 30 08:26:22 crc kubenswrapper[4870]: I0130 08:26:22.140532 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"625f2d84-6699-4e9f-881e-e96509760e9d","Type":"ContainerStarted","Data":"34c67fa453057204c29fb530e7edee04517cc65eea6c392dcd2f034238d625bf"} Jan 30 08:26:22 crc kubenswrapper[4870]: I0130 08:26:22.140548 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9cd4f5bf5-n8lzz" event={"ID":"7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2","Type":"ContainerDied","Data":"954aad0a8aef2531b8fe421c82e8e2a545985a769437ed6265789109329f05be"} Jan 30 08:26:22 crc kubenswrapper[4870]: E0130 08:26:22.143504 4870 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7b3642e2a85c0a9325578b22f6081309876b1e1994b22fd2150d35aa4cf293fe/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7b3642e2a85c0a9325578b22f6081309876b1e1994b22fd2150d35aa4cf293fe/diff: no such file or directory, extraDiskErr: Jan 30 08:26:22 crc kubenswrapper[4870]: I0130 08:26:22.153770 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-gznh8"] Jan 30 08:26:22 crc kubenswrapper[4870]: I0130 08:26:22.520298 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9cd4f5bf5-n8lzz"] Jan 30 08:26:22 crc kubenswrapper[4870]: I0130 08:26:22.535801 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9cd4f5bf5-n8lzz"] Jan 30 08:26:22 crc kubenswrapper[4870]: I0130 08:26:22.552951 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dd95798b9-btgvj"] Jan 30 08:26:22 crc kubenswrapper[4870]: I0130 08:26:22.565678 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6dd95798b9-btgvj"] Jan 30 08:26:22 crc kubenswrapper[4870]: I0130 08:26:22.816561 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 08:26:24 crc kubenswrapper[4870]: I0130 08:26:24.092419 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2" path="/var/lib/kubelet/pods/7c2e9ac8-fed6-4f2e-9a1b-26e0b253a3d2/volumes" Jan 30 08:26:24 crc kubenswrapper[4870]: I0130 08:26:24.093279 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c135f9a2-386b-4108-a40d-a703e4d72b13" path="/var/lib/kubelet/pods/c135f9a2-386b-4108-a40d-a703e4d72b13/volumes" Jan 30 08:26:24 crc kubenswrapper[4870]: I0130 08:26:24.156095 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gznh8" event={"ID":"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2","Type":"ContainerStarted","Data":"156d64623cdcbda588e81e7d32516ad1ea6979270ad5c6324909bcb2aa418fc4"} Jan 30 08:26:24 crc 
kubenswrapper[4870]: W0130 08:26:24.526292 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9a5fd23_1240_4284_91cf_b57f4b2e3d02.slice/crio-d7ec8a87b0ed4e5b5483de9c1bc68ccfc75f940481071af5486f21894eb3aa9e WatchSource:0}: Error finding container d7ec8a87b0ed4e5b5483de9c1bc68ccfc75f940481071af5486f21894eb3aa9e: Status 404 returned error can't find the container with id d7ec8a87b0ed4e5b5483de9c1bc68ccfc75f940481071af5486f21894eb3aa9e Jan 30 08:26:25 crc kubenswrapper[4870]: I0130 08:26:25.046977 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64594fd94f-bp9gj" Jan 30 08:26:25 crc kubenswrapper[4870]: I0130 08:26:25.165996 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e9a5fd23-1240-4284-91cf-b57f4b2e3d02","Type":"ContainerStarted","Data":"d7ec8a87b0ed4e5b5483de9c1bc68ccfc75f940481071af5486f21894eb3aa9e"} Jan 30 08:26:25 crc kubenswrapper[4870]: I0130 08:26:25.168144 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64594fd94f-bp9gj" event={"ID":"f491adde-145d-44fc-9414-0fd92c41a114","Type":"ContainerDied","Data":"86a61707384abe477056d320911123a063b0ba5f70255594ea4531c5ddc156a1"} Jan 30 08:26:25 crc kubenswrapper[4870]: I0130 08:26:25.168179 4870 scope.go:117] "RemoveContainer" containerID="e0bc981d691db0b6447c821e168e88197478822865e6c93bfc11c3c320369da7" Jan 30 08:26:25 crc kubenswrapper[4870]: I0130 08:26:25.168305 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64594fd94f-bp9gj" Jan 30 08:26:25 crc kubenswrapper[4870]: I0130 08:26:25.193754 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f491adde-145d-44fc-9414-0fd92c41a114-config\") pod \"f491adde-145d-44fc-9414-0fd92c41a114\" (UID: \"f491adde-145d-44fc-9414-0fd92c41a114\") " Jan 30 08:26:25 crc kubenswrapper[4870]: I0130 08:26:25.193809 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f491adde-145d-44fc-9414-0fd92c41a114-dns-svc\") pod \"f491adde-145d-44fc-9414-0fd92c41a114\" (UID: \"f491adde-145d-44fc-9414-0fd92c41a114\") " Jan 30 08:26:25 crc kubenswrapper[4870]: I0130 08:26:25.193888 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26wbn\" (UniqueName: \"kubernetes.io/projected/f491adde-145d-44fc-9414-0fd92c41a114-kube-api-access-26wbn\") pod \"f491adde-145d-44fc-9414-0fd92c41a114\" (UID: \"f491adde-145d-44fc-9414-0fd92c41a114\") " Jan 30 08:26:25 crc kubenswrapper[4870]: I0130 08:26:25.198251 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f491adde-145d-44fc-9414-0fd92c41a114-kube-api-access-26wbn" (OuterVolumeSpecName: "kube-api-access-26wbn") pod "f491adde-145d-44fc-9414-0fd92c41a114" (UID: "f491adde-145d-44fc-9414-0fd92c41a114"). InnerVolumeSpecName "kube-api-access-26wbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:26:25 crc kubenswrapper[4870]: I0130 08:26:25.216503 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f491adde-145d-44fc-9414-0fd92c41a114-config" (OuterVolumeSpecName: "config") pod "f491adde-145d-44fc-9414-0fd92c41a114" (UID: "f491adde-145d-44fc-9414-0fd92c41a114"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:26:25 crc kubenswrapper[4870]: I0130 08:26:25.233774 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f491adde-145d-44fc-9414-0fd92c41a114-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f491adde-145d-44fc-9414-0fd92c41a114" (UID: "f491adde-145d-44fc-9414-0fd92c41a114"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:26:25 crc kubenswrapper[4870]: I0130 08:26:25.295856 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f491adde-145d-44fc-9414-0fd92c41a114-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:25 crc kubenswrapper[4870]: I0130 08:26:25.295903 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26wbn\" (UniqueName: \"kubernetes.io/projected/f491adde-145d-44fc-9414-0fd92c41a114-kube-api-access-26wbn\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:25 crc kubenswrapper[4870]: I0130 08:26:25.295915 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f491adde-145d-44fc-9414-0fd92c41a114-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:25 crc kubenswrapper[4870]: I0130 08:26:25.525226 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64594fd94f-bp9gj"] Jan 30 08:26:25 crc kubenswrapper[4870]: I0130 08:26:25.525294 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64594fd94f-bp9gj"] Jan 30 08:26:26 crc kubenswrapper[4870]: I0130 08:26:26.090140 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f491adde-145d-44fc-9414-0fd92c41a114" path="/var/lib/kubelet/pods/f491adde-145d-44fc-9414-0fd92c41a114/volumes" Jan 30 08:26:32 crc kubenswrapper[4870]: I0130 08:26:32.283332 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d691b652-0077-4709-9e9d-16b87c8d3d3c","Type":"ContainerStarted","Data":"5e167b1036f951cd0167c7e6fa2feda24d5dd87ebdf0206166a3802c96ce12f9"} Jan 30 08:26:32 crc kubenswrapper[4870]: I0130 08:26:32.283971 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 08:26:32 crc kubenswrapper[4870]: I0130 08:26:32.286064 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" event={"ID":"033dbc66-0baa-46b3-8fda-3881303e4e40","Type":"ContainerStarted","Data":"a4110fcf56688524b599e752b9185dc87cb5234e1202ed239766375872ab4d4d"} Jan 30 08:26:32 crc kubenswrapper[4870]: I0130 08:26:32.286277 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" Jan 30 08:26:32 crc kubenswrapper[4870]: I0130 08:26:32.288825 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57467f675c-j7lcp" event={"ID":"a7cb8c3d-2157-4c52-a196-24d514b098ee","Type":"ContainerStarted","Data":"7126b51eddb9f7175bdeb0403c83717f365ff87bb0487d366795485ba9910680"} Jan 30 08:26:32 crc kubenswrapper[4870]: I0130 08:26:32.288995 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57467f675c-j7lcp" Jan 30 08:26:32 crc kubenswrapper[4870]: I0130 08:26:32.316907 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=14.71989495 podStartE2EDuration="23.316870465s" 
podCreationTimestamp="2026-01-30 08:26:09 +0000 UTC" firstStartedPulling="2026-01-30 08:26:20.669896282 +0000 UTC m=+1019.365443391" lastFinishedPulling="2026-01-30 08:26:29.266871747 +0000 UTC m=+1027.962418906" observedRunningTime="2026-01-30 08:26:32.303411716 +0000 UTC m=+1030.998958835" watchObservedRunningTime="2026-01-30 08:26:32.316870465 +0000 UTC m=+1031.012417584" Jan 30 08:26:32 crc kubenswrapper[4870]: I0130 08:26:32.332603 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" podStartSLOduration=27.25844405 podStartE2EDuration="28.332587324s" podCreationTimestamp="2026-01-30 08:26:04 +0000 UTC" firstStartedPulling="2026-01-30 08:26:19.142017419 +0000 UTC m=+1017.837564538" lastFinishedPulling="2026-01-30 08:26:20.216160693 +0000 UTC m=+1018.911707812" observedRunningTime="2026-01-30 08:26:32.3257053 +0000 UTC m=+1031.021252409" watchObservedRunningTime="2026-01-30 08:26:32.332587324 +0000 UTC m=+1031.028134443" Jan 30 08:26:32 crc kubenswrapper[4870]: I0130 08:26:32.360612 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57467f675c-j7lcp" podStartSLOduration=28.360580305 podStartE2EDuration="28.360580305s" podCreationTimestamp="2026-01-30 08:26:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:26:32.353266328 +0000 UTC m=+1031.048813427" watchObservedRunningTime="2026-01-30 08:26:32.360580305 +0000 UTC m=+1031.056127724" Jan 30 08:26:33 crc kubenswrapper[4870]: I0130 08:26:33.300218 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gznh8" event={"ID":"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2","Type":"ContainerStarted","Data":"ef7c5e94c6e0e8c94d28032137e089b103eebf493c58abbb1a5ebd1b4dd0bb24"} Jan 30 08:26:33 crc kubenswrapper[4870]: I0130 08:26:33.303035 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"625f2d84-6699-4e9f-881e-e96509760e9d","Type":"ContainerStarted","Data":"a248299522e63e13e8b74efca6008e7af89425ea5527bd7ce8be41f68e3c1636"} Jan 30 08:26:33 crc kubenswrapper[4870]: I0130 08:26:33.306093 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"31607550-5ccc-4b0b-9fbd-18007a61dcff","Type":"ContainerStarted","Data":"0a51df6c2a8c835be83788e6c0e9cc99339b4ef2dc9fce8dc1f0609f8b094b25"} Jan 30 08:26:33 crc kubenswrapper[4870]: I0130 08:26:33.308470 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e9a5fd23-1240-4284-91cf-b57f4b2e3d02","Type":"ContainerStarted","Data":"83821d5d143761eee4d4af3ed223fbc1ab521a17e0b508f8b9ca0a3c17569a8a"} Jan 30 08:26:33 crc kubenswrapper[4870]: I0130 08:26:33.312028 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a","Type":"ContainerStarted","Data":"36c6b3f0e330f4c5764ceb8dd30c047b28952964612889bae6f27160bd91c81f"} Jan 30 08:26:34 crc kubenswrapper[4870]: I0130 08:26:34.323146 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"2ab884a9-b47a-476a-8f89-140093b96527","Type":"ContainerStarted","Data":"1990bb623e12d14af684a0ba5a125e7077393acb0eeb246cda1b7953fb41a71d"} Jan 30 08:26:34 crc kubenswrapper[4870]: I0130 08:26:34.325247 4870 generic.go:334] "Generic (PLEG): container finished" 
podID="b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2" containerID="ef7c5e94c6e0e8c94d28032137e089b103eebf493c58abbb1a5ebd1b4dd0bb24" exitCode=0 Jan 30 08:26:34 crc kubenswrapper[4870]: I0130 08:26:34.325282 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gznh8" event={"ID":"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2","Type":"ContainerDied","Data":"ef7c5e94c6e0e8c94d28032137e089b103eebf493c58abbb1a5ebd1b4dd0bb24"} Jan 30 08:26:34 crc kubenswrapper[4870]: I0130 08:26:34.329824 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dd3b1e9c-90bb-46b7-8e19-edc1388b2a67","Type":"ContainerStarted","Data":"9f3acd6bbada01f386eced240ac88a963b5d6446c306ab4b66863d5ccf3e1172"} Jan 30 08:26:34 crc kubenswrapper[4870]: I0130 08:26:34.329941 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 08:26:34 crc kubenswrapper[4870]: I0130 08:26:34.332220 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rwchz" event={"ID":"496b707b-8de6-4228-b4fd-a48f3709586c","Type":"ContainerStarted","Data":"8424b8b1a4215db2df55f4ae408c3652cafc92ccbffbd1933bcf3dc11b2b4320"} Jan 30 08:26:34 crc kubenswrapper[4870]: I0130 08:26:34.384960 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-rwchz" podStartSLOduration=10.345152088 podStartE2EDuration="20.384943708s" podCreationTimestamp="2026-01-30 08:26:14 +0000 UTC" firstStartedPulling="2026-01-30 08:26:21.225283365 +0000 UTC m=+1019.920830474" lastFinishedPulling="2026-01-30 08:26:31.265074985 +0000 UTC m=+1029.960622094" observedRunningTime="2026-01-30 08:26:34.377535647 +0000 UTC m=+1033.073082766" watchObservedRunningTime="2026-01-30 08:26:34.384943708 +0000 UTC m=+1033.080490827" Jan 30 08:26:34 crc kubenswrapper[4870]: I0130 08:26:34.393473 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=11.979362329 podStartE2EDuration="24.393451212s" podCreationTimestamp="2026-01-30 08:26:10 +0000 UTC" firstStartedPulling="2026-01-30 08:26:20.6691696 +0000 UTC m=+1019.364716709" lastFinishedPulling="2026-01-30 08:26:33.083258483 +0000 UTC m=+1031.778805592" observedRunningTime="2026-01-30 08:26:34.393421601 +0000 UTC m=+1033.088968710" watchObservedRunningTime="2026-01-30 08:26:34.393451212 +0000 UTC m=+1033.088998331" Jan 30 08:26:34 crc kubenswrapper[4870]: I0130 08:26:34.780841 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-rwchz" Jan 30 08:26:35 crc kubenswrapper[4870]: I0130 08:26:35.339822 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd","Type":"ContainerStarted","Data":"15912521352db9ebeacf4667e3f3fd3607893d70df7becfdb7861d554dec9b49"} Jan 30 08:26:35 crc kubenswrapper[4870]: I0130 08:26:35.344581 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7","Type":"ContainerStarted","Data":"431cfd92f94e1fd13cdf200e4b8c59047ac3e311acf24702741a42c672002d0e"} Jan 30 08:26:35 crc kubenswrapper[4870]: I0130 08:26:35.346380 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"97f21b9d-25bf-4a64-94ef-51d83b662ab3","Type":"ContainerStarted","Data":"55e6a4b3af15640088e3e1927ba88636a5cf35ec532fc2df3395e46ebcf07d79"} Jan 30 08:26:36 crc kubenswrapper[4870]: I0130 08:26:36.357736 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gznh8" event={"ID":"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2","Type":"ContainerStarted","Data":"41df10a95d047b25ae7587f3ced3c928ce3c50893926dc89c0fd432d04195eba"} Jan 30 08:26:36 crc kubenswrapper[4870]: I0130 08:26:36.358490 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gznh8" event={"ID":"b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2","Type":"ContainerStarted","Data":"1a859ee2531e71d18025357ed182dd3124bf293169ea74ac97fb7ddb2e9a18c9"} Jan 30 08:26:36 crc kubenswrapper[4870]: I0130 08:26:36.359755 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"625f2d84-6699-4e9f-881e-e96509760e9d","Type":"ContainerStarted","Data":"9df60d1d62b2ce62252b4703486760ad04e0294a6b54e78f31b03d319f243872"} Jan 30 08:26:36 crc kubenswrapper[4870]: I0130 08:26:36.362231 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e9a5fd23-1240-4284-91cf-b57f4b2e3d02","Type":"ContainerStarted","Data":"7c453dde5d362e21f3144c4b39f706dbcf9d1c3d8699d071ad0f7f13ba2c0d74"} Jan 30 08:26:36 crc kubenswrapper[4870]: I0130 08:26:36.421653 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=8.109272934 podStartE2EDuration="19.421633894s" podCreationTimestamp="2026-01-30 08:26:17 +0000 UTC" firstStartedPulling="2026-01-30 08:26:24.536088698 +0000 UTC m=+1023.231635857" lastFinishedPulling="2026-01-30 08:26:35.848449698 +0000 UTC m=+1034.543996817" observedRunningTime="2026-01-30 08:26:36.419129056 +0000 UTC m=+1035.114676175" watchObservedRunningTime="2026-01-30 08:26:36.421633894 +0000 UTC m=+1035.117181003" Jan 30 08:26:36 crc kubenswrapper[4870]: I0130 08:26:36.825968 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:36 crc kubenswrapper[4870]: I0130 08:26:36.870576 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:37 crc kubenswrapper[4870]: I0130 08:26:37.371583 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:37 crc kubenswrapper[4870]: I0130 08:26:37.402444 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-gznh8" podStartSLOduration=15.63691538 podStartE2EDuration="23.402424833s" podCreationTimestamp="2026-01-30 08:26:14 +0000 UTC" firstStartedPulling="2026-01-30 08:26:23.141006606 +0000 UTC m=+1021.836553715" lastFinishedPulling="2026-01-30 08:26:30.906516019 +0000 UTC m=+1029.602063168" observedRunningTime="2026-01-30 08:26:37.3942814 +0000 UTC m=+1036.089828499" watchObservedRunningTime="2026-01-30 08:26:37.402424833 +0000 UTC m=+1036.097971942" Jan 30 08:26:37 crc kubenswrapper[4870]: I0130 08:26:37.424995 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=8.879714566 podStartE2EDuration="23.424979225s" podCreationTimestamp="2026-01-30 08:26:14 +0000 UTC" firstStartedPulling="2026-01-30 08:26:21.298044159 +0000 UTC m=+1019.993591268" lastFinishedPulling="2026-01-30 08:26:35.843308818 +0000 UTC 
m=+1034.538855927" observedRunningTime="2026-01-30 08:26:37.416981286 +0000 UTC m=+1036.112528405" watchObservedRunningTime="2026-01-30 08:26:37.424979225 +0000 UTC m=+1036.120526334" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.416798 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.679229 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57467f675c-j7lcp"] Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.679698 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57467f675c-j7lcp" podUID="a7cb8c3d-2157-4c52-a196-24d514b098ee" containerName="dnsmasq-dns" containerID="cri-o://7126b51eddb9f7175bdeb0403c83717f365ff87bb0487d366795485ba9910680" gracePeriod=10 Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.689145 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57467f675c-j7lcp" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.711985 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57b46657c9-lg2gz"] Jan 30 08:26:38 crc kubenswrapper[4870]: E0130 08:26:38.712416 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f491adde-145d-44fc-9414-0fd92c41a114" containerName="init" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.712436 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f491adde-145d-44fc-9414-0fd92c41a114" containerName="init" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.712650 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f491adde-145d-44fc-9414-0fd92c41a114" containerName="init" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.713764 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.717114 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.727551 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57b46657c9-lg2gz"] Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.855808 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-56vf8"] Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.857710 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-56vf8" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.860160 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2d4c089-1da0-424e-9f99-008407498c84-config\") pod \"dnsmasq-dns-57b46657c9-lg2gz\" (UID: \"a2d4c089-1da0-424e-9f99-008407498c84\") " pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.860277 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz569\" (UniqueName: \"kubernetes.io/projected/a2d4c089-1da0-424e-9f99-008407498c84-kube-api-access-sz569\") pod \"dnsmasq-dns-57b46657c9-lg2gz\" (UID: \"a2d4c089-1da0-424e-9f99-008407498c84\") " pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.860311 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2d4c089-1da0-424e-9f99-008407498c84-dns-svc\") pod \"dnsmasq-dns-57b46657c9-lg2gz\" (UID: \"a2d4c089-1da0-424e-9f99-008407498c84\") " pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.860347 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2d4c089-1da0-424e-9f99-008407498c84-ovsdbserver-nb\") pod \"dnsmasq-dns-57b46657c9-lg2gz\" (UID: \"a2d4c089-1da0-424e-9f99-008407498c84\") " pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.860427 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-56vf8"] Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.864846 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.962681 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/eaa9048d-8c54-4054-87d1-69c6746c1479-ovn-rundir\") pod \"ovn-controller-metrics-56vf8\" (UID: \"eaa9048d-8c54-4054-87d1-69c6746c1479\") " pod="openstack/ovn-controller-metrics-56vf8" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.962780 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz569\" (UniqueName: \"kubernetes.io/projected/a2d4c089-1da0-424e-9f99-008407498c84-kube-api-access-sz569\") pod \"dnsmasq-dns-57b46657c9-lg2gz\" (UID: \"a2d4c089-1da0-424e-9f99-008407498c84\") " pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.962839 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2d4c089-1da0-424e-9f99-008407498c84-dns-svc\") pod \"dnsmasq-dns-57b46657c9-lg2gz\" (UID: \"a2d4c089-1da0-424e-9f99-008407498c84\") " pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.962890 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc4rd\" (UniqueName: \"kubernetes.io/projected/eaa9048d-8c54-4054-87d1-69c6746c1479-kube-api-access-xc4rd\") pod \"ovn-controller-metrics-56vf8\" (UID: 
\"eaa9048d-8c54-4054-87d1-69c6746c1479\") " pod="openstack/ovn-controller-metrics-56vf8" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.962938 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2d4c089-1da0-424e-9f99-008407498c84-ovsdbserver-nb\") pod \"dnsmasq-dns-57b46657c9-lg2gz\" (UID: \"a2d4c089-1da0-424e-9f99-008407498c84\") " pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.962969 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa9048d-8c54-4054-87d1-69c6746c1479-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-56vf8\" (UID: \"eaa9048d-8c54-4054-87d1-69c6746c1479\") " pod="openstack/ovn-controller-metrics-56vf8" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.963020 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2d4c089-1da0-424e-9f99-008407498c84-config\") pod \"dnsmasq-dns-57b46657c9-lg2gz\" (UID: \"a2d4c089-1da0-424e-9f99-008407498c84\") " pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.963050 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/eaa9048d-8c54-4054-87d1-69c6746c1479-ovs-rundir\") pod \"ovn-controller-metrics-56vf8\" (UID: \"eaa9048d-8c54-4054-87d1-69c6746c1479\") " pod="openstack/ovn-controller-metrics-56vf8" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.963076 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa9048d-8c54-4054-87d1-69c6746c1479-combined-ca-bundle\") pod \"ovn-controller-metrics-56vf8\" (UID: \"eaa9048d-8c54-4054-87d1-69c6746c1479\") " pod="openstack/ovn-controller-metrics-56vf8" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.963110 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa9048d-8c54-4054-87d1-69c6746c1479-config\") pod \"ovn-controller-metrics-56vf8\" (UID: \"eaa9048d-8c54-4054-87d1-69c6746c1479\") " pod="openstack/ovn-controller-metrics-56vf8" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.964500 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2d4c089-1da0-424e-9f99-008407498c84-dns-svc\") pod \"dnsmasq-dns-57b46657c9-lg2gz\" (UID: \"a2d4c089-1da0-424e-9f99-008407498c84\") " pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.964500 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2d4c089-1da0-424e-9f99-008407498c84-config\") pod \"dnsmasq-dns-57b46657c9-lg2gz\" (UID: \"a2d4c089-1da0-424e-9f99-008407498c84\") " pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.964766 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2d4c089-1da0-424e-9f99-008407498c84-ovsdbserver-nb\") pod \"dnsmasq-dns-57b46657c9-lg2gz\" (UID: \"a2d4c089-1da0-424e-9f99-008407498c84\") " 
pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" Jan 30 08:26:38 crc kubenswrapper[4870]: I0130 08:26:38.991581 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz569\" (UniqueName: \"kubernetes.io/projected/a2d4c089-1da0-424e-9f99-008407498c84-kube-api-access-sz569\") pod \"dnsmasq-dns-57b46657c9-lg2gz\" (UID: \"a2d4c089-1da0-424e-9f99-008407498c84\") " pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.018998 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d56d856cf-n69v7"] Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.019452 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" podUID="033dbc66-0baa-46b3-8fda-3881303e4e40" containerName="dnsmasq-dns" containerID="cri-o://a4110fcf56688524b599e752b9185dc87cb5234e1202ed239766375872ab4d4d" gracePeriod=10 Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.028709 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.035344 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5899d7d557-qxdtt"] Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.036586 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.042898 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.054298 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5899d7d557-qxdtt"] Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.065055 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/eaa9048d-8c54-4054-87d1-69c6746c1479-ovn-rundir\") pod \"ovn-controller-metrics-56vf8\" (UID: \"eaa9048d-8c54-4054-87d1-69c6746c1479\") " pod="openstack/ovn-controller-metrics-56vf8" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.065129 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc4rd\" (UniqueName: \"kubernetes.io/projected/eaa9048d-8c54-4054-87d1-69c6746c1479-kube-api-access-xc4rd\") pod \"ovn-controller-metrics-56vf8\" (UID: \"eaa9048d-8c54-4054-87d1-69c6746c1479\") " pod="openstack/ovn-controller-metrics-56vf8" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.065172 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa9048d-8c54-4054-87d1-69c6746c1479-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-56vf8\" (UID: \"eaa9048d-8c54-4054-87d1-69c6746c1479\") " pod="openstack/ovn-controller-metrics-56vf8" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.065212 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/eaa9048d-8c54-4054-87d1-69c6746c1479-ovs-rundir\") pod \"ovn-controller-metrics-56vf8\" (UID: \"eaa9048d-8c54-4054-87d1-69c6746c1479\") " pod="openstack/ovn-controller-metrics-56vf8" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.065232 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/eaa9048d-8c54-4054-87d1-69c6746c1479-combined-ca-bundle\") pod \"ovn-controller-metrics-56vf8\" (UID: \"eaa9048d-8c54-4054-87d1-69c6746c1479\") " pod="openstack/ovn-controller-metrics-56vf8" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.065254 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa9048d-8c54-4054-87d1-69c6746c1479-config\") pod \"ovn-controller-metrics-56vf8\" (UID: \"eaa9048d-8c54-4054-87d1-69c6746c1479\") " pod="openstack/ovn-controller-metrics-56vf8" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.065418 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/eaa9048d-8c54-4054-87d1-69c6746c1479-ovn-rundir\") pod \"ovn-controller-metrics-56vf8\" (UID: \"eaa9048d-8c54-4054-87d1-69c6746c1479\") " pod="openstack/ovn-controller-metrics-56vf8" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.065868 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/eaa9048d-8c54-4054-87d1-69c6746c1479-ovs-rundir\") pod \"ovn-controller-metrics-56vf8\" (UID: \"eaa9048d-8c54-4054-87d1-69c6746c1479\") " pod="openstack/ovn-controller-metrics-56vf8" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.065985 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa9048d-8c54-4054-87d1-69c6746c1479-config\") pod \"ovn-controller-metrics-56vf8\" (UID: \"eaa9048d-8c54-4054-87d1-69c6746c1479\") " pod="openstack/ovn-controller-metrics-56vf8" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.081337 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa9048d-8c54-4054-87d1-69c6746c1479-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-56vf8\" (UID: \"eaa9048d-8c54-4054-87d1-69c6746c1479\") " pod="openstack/ovn-controller-metrics-56vf8" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.084599 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.085675 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc4rd\" (UniqueName: \"kubernetes.io/projected/eaa9048d-8c54-4054-87d1-69c6746c1479-kube-api-access-xc4rd\") pod \"ovn-controller-metrics-56vf8\" (UID: \"eaa9048d-8c54-4054-87d1-69c6746c1479\") " pod="openstack/ovn-controller-metrics-56vf8" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.107586 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa9048d-8c54-4054-87d1-69c6746c1479-combined-ca-bundle\") pod \"ovn-controller-metrics-56vf8\" (UID: \"eaa9048d-8c54-4054-87d1-69c6746c1479\") " pod="openstack/ovn-controller-metrics-56vf8" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.169052 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-dns-svc\") pod \"dnsmasq-dns-5899d7d557-qxdtt\" (UID: \"c06e0509-685b-4010-9aef-1388bc28248d\") " pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.169138 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-config\") pod \"dnsmasq-dns-5899d7d557-qxdtt\" (UID: \"c06e0509-685b-4010-9aef-1388bc28248d\") " pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.169162 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smqm6\" (UniqueName: \"kubernetes.io/projected/c06e0509-685b-4010-9aef-1388bc28248d-kube-api-access-smqm6\") pod \"dnsmasq-dns-5899d7d557-qxdtt\" (UID: \"c06e0509-685b-4010-9aef-1388bc28248d\") " pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.169217 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-ovsdbserver-sb\") pod \"dnsmasq-dns-5899d7d557-qxdtt\" (UID: \"c06e0509-685b-4010-9aef-1388bc28248d\") " pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.169246 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-ovsdbserver-nb\") pod \"dnsmasq-dns-5899d7d557-qxdtt\" (UID: \"c06e0509-685b-4010-9aef-1388bc28248d\") " pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.182981 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-56vf8" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.250891 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57467f675c-j7lcp" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.272342 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-dns-svc\") pod \"dnsmasq-dns-5899d7d557-qxdtt\" (UID: \"c06e0509-685b-4010-9aef-1388bc28248d\") " pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.272839 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-config\") pod \"dnsmasq-dns-5899d7d557-qxdtt\" (UID: \"c06e0509-685b-4010-9aef-1388bc28248d\") " pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.272885 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smqm6\" (UniqueName: \"kubernetes.io/projected/c06e0509-685b-4010-9aef-1388bc28248d-kube-api-access-smqm6\") pod \"dnsmasq-dns-5899d7d557-qxdtt\" (UID: \"c06e0509-685b-4010-9aef-1388bc28248d\") " pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.272930 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-ovsdbserver-sb\") pod \"dnsmasq-dns-5899d7d557-qxdtt\" (UID: \"c06e0509-685b-4010-9aef-1388bc28248d\") " pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.272960 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-ovsdbserver-nb\") pod \"dnsmasq-dns-5899d7d557-qxdtt\" (UID: \"c06e0509-685b-4010-9aef-1388bc28248d\") " pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.273959 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-ovsdbserver-nb\") pod \"dnsmasq-dns-5899d7d557-qxdtt\" (UID: \"c06e0509-685b-4010-9aef-1388bc28248d\") " pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.274532 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-config\") pod \"dnsmasq-dns-5899d7d557-qxdtt\" (UID: \"c06e0509-685b-4010-9aef-1388bc28248d\") " pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.275922 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-ovsdbserver-sb\") pod \"dnsmasq-dns-5899d7d557-qxdtt\" (UID: \"c06e0509-685b-4010-9aef-1388bc28248d\") " pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.276061 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-dns-svc\") pod \"dnsmasq-dns-5899d7d557-qxdtt\" (UID: \"c06e0509-685b-4010-9aef-1388bc28248d\") " pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 
08:26:39.293252 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smqm6\" (UniqueName: \"kubernetes.io/projected/c06e0509-685b-4010-9aef-1388bc28248d-kube-api-access-smqm6\") pod \"dnsmasq-dns-5899d7d557-qxdtt\" (UID: \"c06e0509-685b-4010-9aef-1388bc28248d\") " pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.359984 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.371465 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.374005 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7cb8c3d-2157-4c52-a196-24d514b098ee-dns-svc\") pod \"a7cb8c3d-2157-4c52-a196-24d514b098ee\" (UID: \"a7cb8c3d-2157-4c52-a196-24d514b098ee\") " Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.374191 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7cb8c3d-2157-4c52-a196-24d514b098ee-config\") pod \"a7cb8c3d-2157-4c52-a196-24d514b098ee\" (UID: \"a7cb8c3d-2157-4c52-a196-24d514b098ee\") " Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.374340 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pkdd\" (UniqueName: \"kubernetes.io/projected/a7cb8c3d-2157-4c52-a196-24d514b098ee-kube-api-access-6pkdd\") pod \"a7cb8c3d-2157-4c52-a196-24d514b098ee\" (UID: \"a7cb8c3d-2157-4c52-a196-24d514b098ee\") " Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.386101 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7cb8c3d-2157-4c52-a196-24d514b098ee-kube-api-access-6pkdd" (OuterVolumeSpecName: "kube-api-access-6pkdd") pod "a7cb8c3d-2157-4c52-a196-24d514b098ee" (UID: "a7cb8c3d-2157-4c52-a196-24d514b098ee"). InnerVolumeSpecName "kube-api-access-6pkdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.400480 4870 generic.go:334] "Generic (PLEG): container finished" podID="033dbc66-0baa-46b3-8fda-3881303e4e40" containerID="a4110fcf56688524b599e752b9185dc87cb5234e1202ed239766375872ab4d4d" exitCode=0 Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.400550 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" event={"ID":"033dbc66-0baa-46b3-8fda-3881303e4e40","Type":"ContainerDied","Data":"a4110fcf56688524b599e752b9185dc87cb5234e1202ed239766375872ab4d4d"} Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.402530 4870 generic.go:334] "Generic (PLEG): container finished" podID="a7cb8c3d-2157-4c52-a196-24d514b098ee" containerID="7126b51eddb9f7175bdeb0403c83717f365ff87bb0487d366795485ba9910680" exitCode=0 Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.403497 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57467f675c-j7lcp" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.403979 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57467f675c-j7lcp" event={"ID":"a7cb8c3d-2157-4c52-a196-24d514b098ee","Type":"ContainerDied","Data":"7126b51eddb9f7175bdeb0403c83717f365ff87bb0487d366795485ba9910680"} Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.404063 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57467f675c-j7lcp" event={"ID":"a7cb8c3d-2157-4c52-a196-24d514b098ee","Type":"ContainerDied","Data":"166a34a595f3784fb50b62bea8c6b3aecbe48d2fb8d288bfed3951f4b3b662b9"} Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.404160 4870 scope.go:117] "RemoveContainer" containerID="7126b51eddb9f7175bdeb0403c83717f365ff87bb0487d366795485ba9910680" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.439066 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7cb8c3d-2157-4c52-a196-24d514b098ee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a7cb8c3d-2157-4c52-a196-24d514b098ee" (UID: "a7cb8c3d-2157-4c52-a196-24d514b098ee"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.441295 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7cb8c3d-2157-4c52-a196-24d514b098ee-config" (OuterVolumeSpecName: "config") pod "a7cb8c3d-2157-4c52-a196-24d514b098ee" (UID: "a7cb8c3d-2157-4c52-a196-24d514b098ee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.445370 4870 scope.go:117] "RemoveContainer" containerID="f169d9b6f52360d0a659533116b73054218a299b89ad6a74d7b9d17f3b0818bc" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.465832 4870 scope.go:117] "RemoveContainer" containerID="7126b51eddb9f7175bdeb0403c83717f365ff87bb0487d366795485ba9910680" Jan 30 08:26:39 crc kubenswrapper[4870]: E0130 08:26:39.469520 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7126b51eddb9f7175bdeb0403c83717f365ff87bb0487d366795485ba9910680\": container with ID starting with 7126b51eddb9f7175bdeb0403c83717f365ff87bb0487d366795485ba9910680 not found: ID does not exist" containerID="7126b51eddb9f7175bdeb0403c83717f365ff87bb0487d366795485ba9910680" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.469563 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7126b51eddb9f7175bdeb0403c83717f365ff87bb0487d366795485ba9910680"} err="failed to get container status \"7126b51eddb9f7175bdeb0403c83717f365ff87bb0487d366795485ba9910680\": rpc error: code = NotFound desc = could not find container \"7126b51eddb9f7175bdeb0403c83717f365ff87bb0487d366795485ba9910680\": container with ID starting with 7126b51eddb9f7175bdeb0403c83717f365ff87bb0487d366795485ba9910680 not found: ID does not exist" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.469592 4870 scope.go:117] "RemoveContainer" containerID="f169d9b6f52360d0a659533116b73054218a299b89ad6a74d7b9d17f3b0818bc" Jan 30 08:26:39 crc kubenswrapper[4870]: E0130 08:26:39.470110 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f169d9b6f52360d0a659533116b73054218a299b89ad6a74d7b9d17f3b0818bc\": container with ID starting with f169d9b6f52360d0a659533116b73054218a299b89ad6a74d7b9d17f3b0818bc not found: ID does not exist" containerID="f169d9b6f52360d0a659533116b73054218a299b89ad6a74d7b9d17f3b0818bc" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.470141 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f169d9b6f52360d0a659533116b73054218a299b89ad6a74d7b9d17f3b0818bc"} err="failed to get container status \"f169d9b6f52360d0a659533116b73054218a299b89ad6a74d7b9d17f3b0818bc\": rpc error: code = NotFound desc = could not find container \"f169d9b6f52360d0a659533116b73054218a299b89ad6a74d7b9d17f3b0818bc\": container with ID starting with f169d9b6f52360d0a659533116b73054218a299b89ad6a74d7b9d17f3b0818bc not found: ID does not exist" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.477370 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pkdd\" (UniqueName: \"kubernetes.io/projected/a7cb8c3d-2157-4c52-a196-24d514b098ee-kube-api-access-6pkdd\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.477395 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7cb8c3d-2157-4c52-a196-24d514b098ee-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.477404 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7cb8c3d-2157-4c52-a196-24d514b098ee-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:39 crc kubenswrapper[4870]: W0130 08:26:39.650571 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2d4c089_1da0_424e_9f99_008407498c84.slice/crio-b29df059ce6bb06a2e19ec34d7aab1136851e890c03a603fea3383c675972b5a WatchSource:0}: Error finding container b29df059ce6bb06a2e19ec34d7aab1136851e890c03a603fea3383c675972b5a: Status 404 returned error can't find the container with id b29df059ce6bb06a2e19ec34d7aab1136851e890c03a603fea3383c675972b5a Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.651777 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57b46657c9-lg2gz"] Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.686844 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.747018 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57467f675c-j7lcp"] Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.752733 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.754639 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57467f675c-j7lcp"] Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.768330 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-56vf8"] Jan 30 08:26:39 crc kubenswrapper[4870]: W0130 08:26:39.780721 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa9048d_8c54_4054_87d1_69c6746c1479.slice/crio-0f77ed04ed539efc5772e6e82c1708b77d0487083193039b6ffd7c3e093cad9f WatchSource:0}: Error finding container 
0f77ed04ed539efc5772e6e82c1708b77d0487083193039b6ffd7c3e093cad9f: Status 404 returned error can't find the container with id 0f77ed04ed539efc5772e6e82c1708b77d0487083193039b6ffd7c3e093cad9f Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.792982 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.793023 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:26:39 crc kubenswrapper[4870]: I0130 08:26:39.883796 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5899d7d557-qxdtt"] Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.083260 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7cb8c3d-2157-4c52-a196-24d514b098ee" path="/var/lib/kubelet/pods/a7cb8c3d-2157-4c52-a196-24d514b098ee/volumes" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.173264 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.299690 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/033dbc66-0baa-46b3-8fda-3881303e4e40-config\") pod \"033dbc66-0baa-46b3-8fda-3881303e4e40\" (UID: \"033dbc66-0baa-46b3-8fda-3881303e4e40\") " Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.299784 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd26p\" (UniqueName: \"kubernetes.io/projected/033dbc66-0baa-46b3-8fda-3881303e4e40-kube-api-access-zd26p\") pod \"033dbc66-0baa-46b3-8fda-3881303e4e40\" (UID: \"033dbc66-0baa-46b3-8fda-3881303e4e40\") " Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.299855 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/033dbc66-0baa-46b3-8fda-3881303e4e40-dns-svc\") pod \"033dbc66-0baa-46b3-8fda-3881303e4e40\" (UID: \"033dbc66-0baa-46b3-8fda-3881303e4e40\") " Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.305229 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/033dbc66-0baa-46b3-8fda-3881303e4e40-kube-api-access-zd26p" (OuterVolumeSpecName: "kube-api-access-zd26p") pod "033dbc66-0baa-46b3-8fda-3881303e4e40" (UID: "033dbc66-0baa-46b3-8fda-3881303e4e40"). InnerVolumeSpecName "kube-api-access-zd26p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.371839 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/033dbc66-0baa-46b3-8fda-3881303e4e40-config" (OuterVolumeSpecName: "config") pod "033dbc66-0baa-46b3-8fda-3881303e4e40" (UID: "033dbc66-0baa-46b3-8fda-3881303e4e40"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.376481 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/033dbc66-0baa-46b3-8fda-3881303e4e40-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "033dbc66-0baa-46b3-8fda-3881303e4e40" (UID: "033dbc66-0baa-46b3-8fda-3881303e4e40"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.401698 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/033dbc66-0baa-46b3-8fda-3881303e4e40-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.401727 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd26p\" (UniqueName: \"kubernetes.io/projected/033dbc66-0baa-46b3-8fda-3881303e4e40-kube-api-access-zd26p\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.401738 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/033dbc66-0baa-46b3-8fda-3881303e4e40-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.412516 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" event={"ID":"c06e0509-685b-4010-9aef-1388bc28248d","Type":"ContainerStarted","Data":"88033e8d4c732a95cee343d71511e6c36bd6bfa8fe952134c2f0152ffc8ba5b1"} Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.413481 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-56vf8" event={"ID":"eaa9048d-8c54-4054-87d1-69c6746c1479","Type":"ContainerStarted","Data":"0f77ed04ed539efc5772e6e82c1708b77d0487083193039b6ffd7c3e093cad9f"} Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.414892 4870 generic.go:334] "Generic (PLEG): container finished" podID="a2d4c089-1da0-424e-9f99-008407498c84" containerID="59e1a6d7a27f28bf32baea3e14c93e926f69980f3fb15bd86a63262f4100c81d" exitCode=0 Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.414958 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" event={"ID":"a2d4c089-1da0-424e-9f99-008407498c84","Type":"ContainerDied","Data":"59e1a6d7a27f28bf32baea3e14c93e926f69980f3fb15bd86a63262f4100c81d"} Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.414984 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" event={"ID":"a2d4c089-1da0-424e-9f99-008407498c84","Type":"ContainerStarted","Data":"b29df059ce6bb06a2e19ec34d7aab1136851e890c03a603fea3383c675972b5a"} Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.417603 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.420411 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" event={"ID":"033dbc66-0baa-46b3-8fda-3881303e4e40","Type":"ContainerDied","Data":"be7d1aa3f3670be6d94e4a7facc9d6deaa6f49ecc11c230b4a15c8f46a3117b6"} Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.420480 4870 scope.go:117] "RemoveContainer" containerID="a4110fcf56688524b599e752b9185dc87cb5234e1202ed239766375872ab4d4d" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.424494 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.472283 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d56d856cf-n69v7"] Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.483571 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d56d856cf-n69v7"] Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.748167 4870 scope.go:117] "RemoveContainer" containerID="fa2967a57d46eadef06dee8c0ac950d7fe80b8807314b92bceff366e74a2aaa8" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.780861 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.971833 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 30 08:26:40 crc kubenswrapper[4870]: E0130 08:26:40.972350 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7cb8c3d-2157-4c52-a196-24d514b098ee" containerName="dnsmasq-dns" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.972368 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7cb8c3d-2157-4c52-a196-24d514b098ee" containerName="dnsmasq-dns" Jan 30 08:26:40 crc kubenswrapper[4870]: E0130 08:26:40.972421 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="033dbc66-0baa-46b3-8fda-3881303e4e40" containerName="init" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.972428 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="033dbc66-0baa-46b3-8fda-3881303e4e40" containerName="init" Jan 30 08:26:40 crc kubenswrapper[4870]: E0130 08:26:40.972443 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="033dbc66-0baa-46b3-8fda-3881303e4e40" containerName="dnsmasq-dns" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.972451 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="033dbc66-0baa-46b3-8fda-3881303e4e40" containerName="dnsmasq-dns" Jan 30 08:26:40 crc kubenswrapper[4870]: E0130 08:26:40.972464 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7cb8c3d-2157-4c52-a196-24d514b098ee" containerName="init" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.972471 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7cb8c3d-2157-4c52-a196-24d514b098ee" containerName="init" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.972665 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="033dbc66-0baa-46b3-8fda-3881303e4e40" containerName="dnsmasq-dns" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.972686 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7cb8c3d-2157-4c52-a196-24d514b098ee" containerName="dnsmasq-dns" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.974063 4870 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.983456 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.983494 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-7z24b" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.983760 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.983904 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 30 08:26:40 crc kubenswrapper[4870]: I0130 08:26:40.996618 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.123184 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.136952 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d69aef12-ac48-41f7-8a14-a561edab0ae7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d69aef12-ac48-41f7-8a14-a561edab0ae7\") " pod="openstack/ovn-northd-0" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.137004 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d69aef12-ac48-41f7-8a14-a561edab0ae7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d69aef12-ac48-41f7-8a14-a561edab0ae7\") " pod="openstack/ovn-northd-0" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.137047 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69aef12-ac48-41f7-8a14-a561edab0ae7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d69aef12-ac48-41f7-8a14-a561edab0ae7\") " pod="openstack/ovn-northd-0" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.137085 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsgpx\" (UniqueName: \"kubernetes.io/projected/d69aef12-ac48-41f7-8a14-a561edab0ae7-kube-api-access-bsgpx\") pod \"ovn-northd-0\" (UID: \"d69aef12-ac48-41f7-8a14-a561edab0ae7\") " pod="openstack/ovn-northd-0" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.137120 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d69aef12-ac48-41f7-8a14-a561edab0ae7-config\") pod \"ovn-northd-0\" (UID: \"d69aef12-ac48-41f7-8a14-a561edab0ae7\") " pod="openstack/ovn-northd-0" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.137151 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d69aef12-ac48-41f7-8a14-a561edab0ae7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d69aef12-ac48-41f7-8a14-a561edab0ae7\") " pod="openstack/ovn-northd-0" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.137179 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/d69aef12-ac48-41f7-8a14-a561edab0ae7-scripts\") pod \"ovn-northd-0\" (UID: \"d69aef12-ac48-41f7-8a14-a561edab0ae7\") " pod="openstack/ovn-northd-0" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.202851 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57b46657c9-lg2gz"] Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.236306 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65cc6fcf45-r6b9m"] Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.237597 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.238211 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d69aef12-ac48-41f7-8a14-a561edab0ae7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d69aef12-ac48-41f7-8a14-a561edab0ae7\") " pod="openstack/ovn-northd-0" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.238243 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d69aef12-ac48-41f7-8a14-a561edab0ae7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d69aef12-ac48-41f7-8a14-a561edab0ae7\") " pod="openstack/ovn-northd-0" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.238279 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69aef12-ac48-41f7-8a14-a561edab0ae7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d69aef12-ac48-41f7-8a14-a561edab0ae7\") " pod="openstack/ovn-northd-0" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.238331 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsgpx\" (UniqueName: \"kubernetes.io/projected/d69aef12-ac48-41f7-8a14-a561edab0ae7-kube-api-access-bsgpx\") pod \"ovn-northd-0\" (UID: \"d69aef12-ac48-41f7-8a14-a561edab0ae7\") " pod="openstack/ovn-northd-0" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.238478 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d69aef12-ac48-41f7-8a14-a561edab0ae7-config\") pod \"ovn-northd-0\" (UID: \"d69aef12-ac48-41f7-8a14-a561edab0ae7\") " pod="openstack/ovn-northd-0" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.238790 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d69aef12-ac48-41f7-8a14-a561edab0ae7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d69aef12-ac48-41f7-8a14-a561edab0ae7\") " pod="openstack/ovn-northd-0" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.239184 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d69aef12-ac48-41f7-8a14-a561edab0ae7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d69aef12-ac48-41f7-8a14-a561edab0ae7\") " pod="openstack/ovn-northd-0" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.239296 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d69aef12-ac48-41f7-8a14-a561edab0ae7-scripts\") pod \"ovn-northd-0\" (UID: \"d69aef12-ac48-41f7-8a14-a561edab0ae7\") " pod="openstack/ovn-northd-0" Jan 30 08:26:41 crc 
kubenswrapper[4870]: I0130 08:26:41.239573 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d69aef12-ac48-41f7-8a14-a561edab0ae7-config\") pod \"ovn-northd-0\" (UID: \"d69aef12-ac48-41f7-8a14-a561edab0ae7\") " pod="openstack/ovn-northd-0" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.239967 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d69aef12-ac48-41f7-8a14-a561edab0ae7-scripts\") pod \"ovn-northd-0\" (UID: \"d69aef12-ac48-41f7-8a14-a561edab0ae7\") " pod="openstack/ovn-northd-0" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.244035 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69aef12-ac48-41f7-8a14-a561edab0ae7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d69aef12-ac48-41f7-8a14-a561edab0ae7\") " pod="openstack/ovn-northd-0" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.245215 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d69aef12-ac48-41f7-8a14-a561edab0ae7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d69aef12-ac48-41f7-8a14-a561edab0ae7\") " pod="openstack/ovn-northd-0" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.254500 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d69aef12-ac48-41f7-8a14-a561edab0ae7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d69aef12-ac48-41f7-8a14-a561edab0ae7\") " pod="openstack/ovn-northd-0" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.257836 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65cc6fcf45-r6b9m"] Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.271824 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsgpx\" (UniqueName: \"kubernetes.io/projected/d69aef12-ac48-41f7-8a14-a561edab0ae7-kube-api-access-bsgpx\") pod \"ovn-northd-0\" (UID: \"d69aef12-ac48-41f7-8a14-a561edab0ae7\") " pod="openstack/ovn-northd-0" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.333218 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.340642 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-config\") pod \"dnsmasq-dns-65cc6fcf45-r6b9m\" (UID: \"8cd19c31-4252-4de7-a673-9da7aedcb785\") " pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.340733 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx8qz\" (UniqueName: \"kubernetes.io/projected/8cd19c31-4252-4de7-a673-9da7aedcb785-kube-api-access-cx8qz\") pod \"dnsmasq-dns-65cc6fcf45-r6b9m\" (UID: \"8cd19c31-4252-4de7-a673-9da7aedcb785\") " pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.340786 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-ovsdbserver-nb\") pod \"dnsmasq-dns-65cc6fcf45-r6b9m\" (UID: \"8cd19c31-4252-4de7-a673-9da7aedcb785\") " pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.340820 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-dns-svc\") pod \"dnsmasq-dns-65cc6fcf45-r6b9m\" (UID: \"8cd19c31-4252-4de7-a673-9da7aedcb785\") " pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.340835 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-ovsdbserver-sb\") pod \"dnsmasq-dns-65cc6fcf45-r6b9m\" (UID: \"8cd19c31-4252-4de7-a673-9da7aedcb785\") " pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.441924 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-ovsdbserver-nb\") pod \"dnsmasq-dns-65cc6fcf45-r6b9m\" (UID: \"8cd19c31-4252-4de7-a673-9da7aedcb785\") " pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.442194 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-dns-svc\") pod \"dnsmasq-dns-65cc6fcf45-r6b9m\" (UID: \"8cd19c31-4252-4de7-a673-9da7aedcb785\") " pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.442217 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-ovsdbserver-sb\") pod \"dnsmasq-dns-65cc6fcf45-r6b9m\" (UID: \"8cd19c31-4252-4de7-a673-9da7aedcb785\") " pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.442290 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-config\") pod \"dnsmasq-dns-65cc6fcf45-r6b9m\" (UID: 
\"8cd19c31-4252-4de7-a673-9da7aedcb785\") " pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.442338 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx8qz\" (UniqueName: \"kubernetes.io/projected/8cd19c31-4252-4de7-a673-9da7aedcb785-kube-api-access-cx8qz\") pod \"dnsmasq-dns-65cc6fcf45-r6b9m\" (UID: \"8cd19c31-4252-4de7-a673-9da7aedcb785\") " pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.443023 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-ovsdbserver-nb\") pod \"dnsmasq-dns-65cc6fcf45-r6b9m\" (UID: \"8cd19c31-4252-4de7-a673-9da7aedcb785\") " pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.443295 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-dns-svc\") pod \"dnsmasq-dns-65cc6fcf45-r6b9m\" (UID: \"8cd19c31-4252-4de7-a673-9da7aedcb785\") " pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.447978 4870 generic.go:334] "Generic (PLEG): container finished" podID="c06e0509-685b-4010-9aef-1388bc28248d" containerID="38cec7542121e32b4b05300244ea351b4cc373c4ddccf22126a36f614eae49e4" exitCode=0 Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.448031 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" event={"ID":"c06e0509-685b-4010-9aef-1388bc28248d","Type":"ContainerDied","Data":"38cec7542121e32b4b05300244ea351b4cc373c4ddccf22126a36f614eae49e4"} Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.451210 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-ovsdbserver-sb\") pod \"dnsmasq-dns-65cc6fcf45-r6b9m\" (UID: \"8cd19c31-4252-4de7-a673-9da7aedcb785\") " pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.458134 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-56vf8" event={"ID":"eaa9048d-8c54-4054-87d1-69c6746c1479","Type":"ContainerStarted","Data":"e05e8c7168dd6280ee42c06df96bc58337340363872a03081ffb828299776621"} Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.465185 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-config\") pod \"dnsmasq-dns-65cc6fcf45-r6b9m\" (UID: \"8cd19c31-4252-4de7-a673-9da7aedcb785\") " pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.467482 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx8qz\" (UniqueName: \"kubernetes.io/projected/8cd19c31-4252-4de7-a673-9da7aedcb785-kube-api-access-cx8qz\") pod \"dnsmasq-dns-65cc6fcf45-r6b9m\" (UID: \"8cd19c31-4252-4de7-a673-9da7aedcb785\") " pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.467529 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" 
event={"ID":"a2d4c089-1da0-424e-9f99-008407498c84","Type":"ContainerStarted","Data":"08a90b3a0b430b6fe52c0e915f7af8b01113438944cfd79325217422f6e2a114"} Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.467560 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.501743 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-56vf8" podStartSLOduration=3.501704036 podStartE2EDuration="3.501704036s" podCreationTimestamp="2026-01-30 08:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:26:41.492551174 +0000 UTC m=+1040.188098283" watchObservedRunningTime="2026-01-30 08:26:41.501704036 +0000 UTC m=+1040.197251145" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.520744 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" podStartSLOduration=3.520726629 podStartE2EDuration="3.520726629s" podCreationTimestamp="2026-01-30 08:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:26:41.52009674 +0000 UTC m=+1040.215643849" watchObservedRunningTime="2026-01-30 08:26:41.520726629 +0000 UTC m=+1040.216273738" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.619283 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" Jan 30 08:26:41 crc kubenswrapper[4870]: E0130 08:26:41.668707 4870 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 30 08:26:41 crc kubenswrapper[4870]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/c06e0509-685b-4010-9aef-1388bc28248d/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 30 08:26:41 crc kubenswrapper[4870]: > podSandboxID="88033e8d4c732a95cee343d71511e6c36bd6bfa8fe952134c2f0152ffc8ba5b1" Jan 30 08:26:41 crc kubenswrapper[4870]: E0130 08:26:41.669234 4870 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 08:26:41 crc kubenswrapper[4870]: container &Container{Name:dnsmasq-dns,Image:38.102.83.23:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7bh64fh67ch5c4h65bh587h67fh546h7bhc4h688h596h5c7h554h99h8h5dch586h7h5cbh686h55h64bh7dhdbhb6h575h65ch654h658h688h65bq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-smqm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5899d7d557-qxdtt_openstack(c06e0509-685b-4010-9aef-1388bc28248d): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/c06e0509-685b-4010-9aef-1388bc28248d/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 30 08:26:41 crc kubenswrapper[4870]: > logger="UnhandledError" Jan 30 08:26:41 crc kubenswrapper[4870]: E0130 08:26:41.670416 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/c06e0509-685b-4010-9aef-1388bc28248d/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" podUID="c06e0509-685b-4010-9aef-1388bc28248d" Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.817686 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 08:26:41 
crc kubenswrapper[4870]: W0130 08:26:41.825865 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd69aef12_ac48_41f7_8a14_a561edab0ae7.slice/crio-97e7f5b8cfce6c8c68f3754b85f177c27702b92922b9f413ad7d73de0c8a4624 WatchSource:0}: Error finding container 97e7f5b8cfce6c8c68f3754b85f177c27702b92922b9f413ad7d73de0c8a4624: Status 404 returned error can't find the container with id 97e7f5b8cfce6c8c68f3754b85f177c27702b92922b9f413ad7d73de0c8a4624 Jan 30 08:26:41 crc kubenswrapper[4870]: I0130 08:26:41.910844 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65cc6fcf45-r6b9m"] Jan 30 08:26:41 crc kubenswrapper[4870]: W0130 08:26:41.948120 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cd19c31_4252_4de7_a673_9da7aedcb785.slice/crio-374b146ad8265eb6041ff5f1143dd86432961e72a020212acd84189e8d8f2978 WatchSource:0}: Error finding container 374b146ad8265eb6041ff5f1143dd86432961e72a020212acd84189e8d8f2978: Status 404 returned error can't find the container with id 374b146ad8265eb6041ff5f1143dd86432961e72a020212acd84189e8d8f2978 Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.094018 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="033dbc66-0baa-46b3-8fda-3881303e4e40" path="/var/lib/kubelet/pods/033dbc66-0baa-46b3-8fda-3881303e4e40/volumes" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.350475 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.355668 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.361775 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.361792 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.362021 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.364071 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-jpqmh" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.376544 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.463411 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrxkx\" (UniqueName: \"kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-kube-api-access-zrxkx\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.463547 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.463592 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/46634e41-7d5b-4181-b824-716bb37fca47-cache\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.463625 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-etc-swift\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.463706 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46634e41-7d5b-4181-b824-716bb37fca47-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.463739 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/46634e41-7d5b-4181-b824-716bb37fca47-lock\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.475634 4870 generic.go:334] "Generic (PLEG): container finished" podID="0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" containerID="431cfd92f94e1fd13cdf200e4b8c59047ac3e311acf24702741a42c672002d0e" exitCode=0 Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.475683 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7","Type":"ContainerDied","Data":"431cfd92f94e1fd13cdf200e4b8c59047ac3e311acf24702741a42c672002d0e"} Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.482094 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d69aef12-ac48-41f7-8a14-a561edab0ae7","Type":"ContainerStarted","Data":"97e7f5b8cfce6c8c68f3754b85f177c27702b92922b9f413ad7d73de0c8a4624"} Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.485037 4870 generic.go:334] "Generic (PLEG): container finished" podID="8cd19c31-4252-4de7-a673-9da7aedcb785" containerID="fa0e8e29630ac45ae5392bdda60293a38298eb7a8fb05baa4e216154fe19f932" exitCode=0 Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.485153 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" event={"ID":"8cd19c31-4252-4de7-a673-9da7aedcb785","Type":"ContainerDied","Data":"fa0e8e29630ac45ae5392bdda60293a38298eb7a8fb05baa4e216154fe19f932"} Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.485234 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" event={"ID":"8cd19c31-4252-4de7-a673-9da7aedcb785","Type":"ContainerStarted","Data":"374b146ad8265eb6041ff5f1143dd86432961e72a020212acd84189e8d8f2978"} Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.485965 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" podUID="a2d4c089-1da0-424e-9f99-008407498c84" containerName="dnsmasq-dns" containerID="cri-o://08a90b3a0b430b6fe52c0e915f7af8b01113438944cfd79325217422f6e2a114" gracePeriod=10 Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.565964 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46634e41-7d5b-4181-b824-716bb37fca47-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.566016 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/46634e41-7d5b-4181-b824-716bb37fca47-lock\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.566149 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrxkx\" (UniqueName: \"kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-kube-api-access-zrxkx\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.566216 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.566280 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/46634e41-7d5b-4181-b824-716bb37fca47-cache\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.566320 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-etc-swift\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.568183 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/swift-storage-0" Jan 30 08:26:42 crc kubenswrapper[4870]: E0130 08:26:42.570426 4870 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:26:42 crc kubenswrapper[4870]: E0130 08:26:42.570442 4870 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:26:42 crc kubenswrapper[4870]: E0130 08:26:42.570493 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-etc-swift podName:46634e41-7d5b-4181-b824-716bb37fca47 nodeName:}" failed. No retries permitted until 2026-01-30 08:26:43.070467153 +0000 UTC m=+1041.766014262 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-etc-swift") pod "swift-storage-0" (UID: "46634e41-7d5b-4181-b824-716bb37fca47") : configmap "swift-ring-files" not found Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.572495 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/46634e41-7d5b-4181-b824-716bb37fca47-lock\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.572743 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/46634e41-7d5b-4181-b824-716bb37fca47-cache\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.580662 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46634e41-7d5b-4181-b824-716bb37fca47-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.588364 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrxkx\" (UniqueName: \"kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-kube-api-access-zrxkx\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.589565 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.875050 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-dlnvg"] Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.876309 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.882612 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.882643 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.884055 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.914742 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dlnvg"] Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.921092 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-dlnvg"] Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.926934 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-gkrl7"] Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.928039 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:26:42 crc kubenswrapper[4870]: E0130 08:26:42.930055 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-7qfqw ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-7qfqw ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-dlnvg" podUID="8ccf52cf-97d4-4b27-8305-24222e79cc73" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.947651 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gkrl7"] Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.973989 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qfqw\" (UniqueName: \"kubernetes.io/projected/8ccf52cf-97d4-4b27-8305-24222e79cc73-kube-api-access-7qfqw\") pod \"swift-ring-rebalance-dlnvg\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.974049 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8ccf52cf-97d4-4b27-8305-24222e79cc73-etc-swift\") pod \"swift-ring-rebalance-dlnvg\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.974120 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccf52cf-97d4-4b27-8305-24222e79cc73-combined-ca-bundle\") pod \"swift-ring-rebalance-dlnvg\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.974145 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8ccf52cf-97d4-4b27-8305-24222e79cc73-scripts\") pod \"swift-ring-rebalance-dlnvg\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.974184 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4406e732-41a8-48a1-954a-6dbe4483a79a-scripts\") pod \"swift-ring-rebalance-gkrl7\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.974213 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8ccf52cf-97d4-4b27-8305-24222e79cc73-dispersionconf\") pod \"swift-ring-rebalance-dlnvg\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.974303 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4406e732-41a8-48a1-954a-6dbe4483a79a-combined-ca-bundle\") pod \"swift-ring-rebalance-gkrl7\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " pod="openstack/swift-ring-rebalance-gkrl7" Jan 
30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.974431 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4406e732-41a8-48a1-954a-6dbe4483a79a-dispersionconf\") pod \"swift-ring-rebalance-gkrl7\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.974472 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4406e732-41a8-48a1-954a-6dbe4483a79a-ring-data-devices\") pod \"swift-ring-rebalance-gkrl7\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.974610 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4406e732-41a8-48a1-954a-6dbe4483a79a-swiftconf\") pod \"swift-ring-rebalance-gkrl7\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.974646 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8ccf52cf-97d4-4b27-8305-24222e79cc73-ring-data-devices\") pod \"swift-ring-rebalance-dlnvg\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.974694 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4406e732-41a8-48a1-954a-6dbe4483a79a-etc-swift\") pod \"swift-ring-rebalance-gkrl7\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.974776 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6z9k\" (UniqueName: \"kubernetes.io/projected/4406e732-41a8-48a1-954a-6dbe4483a79a-kube-api-access-d6z9k\") pod \"swift-ring-rebalance-gkrl7\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:26:42 crc kubenswrapper[4870]: I0130 08:26:42.976094 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8ccf52cf-97d4-4b27-8305-24222e79cc73-swiftconf\") pod \"swift-ring-rebalance-dlnvg\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.077864 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4406e732-41a8-48a1-954a-6dbe4483a79a-dispersionconf\") pod \"swift-ring-rebalance-gkrl7\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.077938 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4406e732-41a8-48a1-954a-6dbe4483a79a-ring-data-devices\") pod \"swift-ring-rebalance-gkrl7\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " 
pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.078006 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4406e732-41a8-48a1-954a-6dbe4483a79a-swiftconf\") pod \"swift-ring-rebalance-gkrl7\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.078033 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8ccf52cf-97d4-4b27-8305-24222e79cc73-ring-data-devices\") pod \"swift-ring-rebalance-dlnvg\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.078066 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4406e732-41a8-48a1-954a-6dbe4483a79a-etc-swift\") pod \"swift-ring-rebalance-gkrl7\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.078094 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6z9k\" (UniqueName: \"kubernetes.io/projected/4406e732-41a8-48a1-954a-6dbe4483a79a-kube-api-access-d6z9k\") pod \"swift-ring-rebalance-gkrl7\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.078137 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-etc-swift\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.078165 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8ccf52cf-97d4-4b27-8305-24222e79cc73-swiftconf\") pod \"swift-ring-rebalance-dlnvg\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.078191 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qfqw\" (UniqueName: \"kubernetes.io/projected/8ccf52cf-97d4-4b27-8305-24222e79cc73-kube-api-access-7qfqw\") pod \"swift-ring-rebalance-dlnvg\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.078214 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8ccf52cf-97d4-4b27-8305-24222e79cc73-etc-swift\") pod \"swift-ring-rebalance-dlnvg\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.078236 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccf52cf-97d4-4b27-8305-24222e79cc73-combined-ca-bundle\") pod \"swift-ring-rebalance-dlnvg\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.078259 4870 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8ccf52cf-97d4-4b27-8305-24222e79cc73-scripts\") pod \"swift-ring-rebalance-dlnvg\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.078311 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4406e732-41a8-48a1-954a-6dbe4483a79a-scripts\") pod \"swift-ring-rebalance-gkrl7\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.078363 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8ccf52cf-97d4-4b27-8305-24222e79cc73-dispersionconf\") pod \"swift-ring-rebalance-dlnvg\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.078394 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4406e732-41a8-48a1-954a-6dbe4483a79a-combined-ca-bundle\") pod \"swift-ring-rebalance-gkrl7\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:26:43 crc kubenswrapper[4870]: E0130 08:26:43.078517 4870 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:26:43 crc kubenswrapper[4870]: E0130 08:26:43.078550 4870 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:26:43 crc kubenswrapper[4870]: E0130 08:26:43.078624 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-etc-swift podName:46634e41-7d5b-4181-b824-716bb37fca47 nodeName:}" failed. No retries permitted until 2026-01-30 08:26:44.078601826 +0000 UTC m=+1042.774148945 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-etc-swift") pod "swift-storage-0" (UID: "46634e41-7d5b-4181-b824-716bb37fca47") : configmap "swift-ring-files" not found Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.079703 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4406e732-41a8-48a1-954a-6dbe4483a79a-etc-swift\") pod \"swift-ring-rebalance-gkrl7\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.079968 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8ccf52cf-97d4-4b27-8305-24222e79cc73-ring-data-devices\") pod \"swift-ring-rebalance-dlnvg\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.080653 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8ccf52cf-97d4-4b27-8305-24222e79cc73-etc-swift\") pod \"swift-ring-rebalance-dlnvg\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.081712 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4406e732-41a8-48a1-954a-6dbe4483a79a-dispersionconf\") pod \"swift-ring-rebalance-gkrl7\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.082791 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8ccf52cf-97d4-4b27-8305-24222e79cc73-scripts\") pod \"swift-ring-rebalance-dlnvg\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.083725 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4406e732-41a8-48a1-954a-6dbe4483a79a-combined-ca-bundle\") pod \"swift-ring-rebalance-gkrl7\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.083792 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccf52cf-97d4-4b27-8305-24222e79cc73-combined-ca-bundle\") pod \"swift-ring-rebalance-dlnvg\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.084417 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4406e732-41a8-48a1-954a-6dbe4483a79a-scripts\") pod \"swift-ring-rebalance-gkrl7\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.084490 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4406e732-41a8-48a1-954a-6dbe4483a79a-ring-data-devices\") pod \"swift-ring-rebalance-gkrl7\" (UID: 
\"4406e732-41a8-48a1-954a-6dbe4483a79a\") " pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.085486 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4406e732-41a8-48a1-954a-6dbe4483a79a-swiftconf\") pod \"swift-ring-rebalance-gkrl7\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.085553 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8ccf52cf-97d4-4b27-8305-24222e79cc73-swiftconf\") pod \"swift-ring-rebalance-dlnvg\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.089262 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8ccf52cf-97d4-4b27-8305-24222e79cc73-dispersionconf\") pod \"swift-ring-rebalance-dlnvg\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.107066 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qfqw\" (UniqueName: \"kubernetes.io/projected/8ccf52cf-97d4-4b27-8305-24222e79cc73-kube-api-access-7qfqw\") pod \"swift-ring-rebalance-dlnvg\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.111177 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6z9k\" (UniqueName: \"kubernetes.io/projected/4406e732-41a8-48a1-954a-6dbe4483a79a-kube-api-access-d6z9k\") pod \"swift-ring-rebalance-gkrl7\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.247601 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.495724 4870 generic.go:334] "Generic (PLEG): container finished" podID="a2d4c089-1da0-424e-9f99-008407498c84" containerID="08a90b3a0b430b6fe52c0e915f7af8b01113438944cfd79325217422f6e2a114" exitCode=0 Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.496180 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.495925 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" event={"ID":"a2d4c089-1da0-424e-9f99-008407498c84","Type":"ContainerDied","Data":"08a90b3a0b430b6fe52c0e915f7af8b01113438944cfd79325217422f6e2a114"} Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.506628 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.584767 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccf52cf-97d4-4b27-8305-24222e79cc73-combined-ca-bundle\") pod \"8ccf52cf-97d4-4b27-8305-24222e79cc73\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.584818 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8ccf52cf-97d4-4b27-8305-24222e79cc73-ring-data-devices\") pod \"8ccf52cf-97d4-4b27-8305-24222e79cc73\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.584839 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8ccf52cf-97d4-4b27-8305-24222e79cc73-dispersionconf\") pod \"8ccf52cf-97d4-4b27-8305-24222e79cc73\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.584867 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8ccf52cf-97d4-4b27-8305-24222e79cc73-scripts\") pod \"8ccf52cf-97d4-4b27-8305-24222e79cc73\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.585010 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8ccf52cf-97d4-4b27-8305-24222e79cc73-etc-swift\") pod \"8ccf52cf-97d4-4b27-8305-24222e79cc73\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.585064 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8ccf52cf-97d4-4b27-8305-24222e79cc73-swiftconf\") pod \"8ccf52cf-97d4-4b27-8305-24222e79cc73\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.585084 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qfqw\" (UniqueName: \"kubernetes.io/projected/8ccf52cf-97d4-4b27-8305-24222e79cc73-kube-api-access-7qfqw\") pod \"8ccf52cf-97d4-4b27-8305-24222e79cc73\" (UID: \"8ccf52cf-97d4-4b27-8305-24222e79cc73\") " Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.585405 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ccf52cf-97d4-4b27-8305-24222e79cc73-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "8ccf52cf-97d4-4b27-8305-24222e79cc73" (UID: "8ccf52cf-97d4-4b27-8305-24222e79cc73"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.585751 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ccf52cf-97d4-4b27-8305-24222e79cc73-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8ccf52cf-97d4-4b27-8305-24222e79cc73" (UID: "8ccf52cf-97d4-4b27-8305-24222e79cc73"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.585864 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ccf52cf-97d4-4b27-8305-24222e79cc73-scripts" (OuterVolumeSpecName: "scripts") pod "8ccf52cf-97d4-4b27-8305-24222e79cc73" (UID: "8ccf52cf-97d4-4b27-8305-24222e79cc73"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.589344 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ccf52cf-97d4-4b27-8305-24222e79cc73-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "8ccf52cf-97d4-4b27-8305-24222e79cc73" (UID: "8ccf52cf-97d4-4b27-8305-24222e79cc73"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.589759 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ccf52cf-97d4-4b27-8305-24222e79cc73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ccf52cf-97d4-4b27-8305-24222e79cc73" (UID: "8ccf52cf-97d4-4b27-8305-24222e79cc73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.589958 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ccf52cf-97d4-4b27-8305-24222e79cc73-kube-api-access-7qfqw" (OuterVolumeSpecName: "kube-api-access-7qfqw") pod "8ccf52cf-97d4-4b27-8305-24222e79cc73" (UID: "8ccf52cf-97d4-4b27-8305-24222e79cc73"). InnerVolumeSpecName "kube-api-access-7qfqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.590378 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ccf52cf-97d4-4b27-8305-24222e79cc73-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "8ccf52cf-97d4-4b27-8305-24222e79cc73" (UID: "8ccf52cf-97d4-4b27-8305-24222e79cc73"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.687456 4870 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8ccf52cf-97d4-4b27-8305-24222e79cc73-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.687821 4870 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8ccf52cf-97d4-4b27-8305-24222e79cc73-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.687838 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qfqw\" (UniqueName: \"kubernetes.io/projected/8ccf52cf-97d4-4b27-8305-24222e79cc73-kube-api-access-7qfqw\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.687858 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccf52cf-97d4-4b27-8305-24222e79cc73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.687900 4870 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8ccf52cf-97d4-4b27-8305-24222e79cc73-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.687916 4870 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8ccf52cf-97d4-4b27-8305-24222e79cc73-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.687941 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8ccf52cf-97d4-4b27-8305-24222e79cc73-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:43 crc kubenswrapper[4870]: I0130 08:26:43.787586 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gkrl7"] Jan 30 08:26:43 crc kubenswrapper[4870]: W0130 08:26:43.983731 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4406e732_41a8_48a1_954a_6dbe4483a79a.slice/crio-c2c9f5c52e6901456bac3bbffbdb6a069ed3e92d4e7f13f97a8b660b6688ff9d WatchSource:0}: Error finding container c2c9f5c52e6901456bac3bbffbdb6a069ed3e92d4e7f13f97a8b660b6688ff9d: Status 404 returned error can't find the container with id c2c9f5c52e6901456bac3bbffbdb6a069ed3e92d4e7f13f97a8b660b6688ff9d Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.096951 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-etc-swift\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:26:44 crc kubenswrapper[4870]: E0130 08:26:44.097115 4870 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:26:44 crc kubenswrapper[4870]: E0130 08:26:44.097127 4870 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:26:44 crc kubenswrapper[4870]: E0130 08:26:44.097169 4870 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-etc-swift podName:46634e41-7d5b-4181-b824-716bb37fca47 nodeName:}" failed. No retries permitted until 2026-01-30 08:26:46.097155527 +0000 UTC m=+1044.792702636 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-etc-swift") pod "swift-storage-0" (UID: "46634e41-7d5b-4181-b824-716bb37fca47") : configmap "swift-ring-files" not found Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.153159 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.198127 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz569\" (UniqueName: \"kubernetes.io/projected/a2d4c089-1da0-424e-9f99-008407498c84-kube-api-access-sz569\") pod \"a2d4c089-1da0-424e-9f99-008407498c84\" (UID: \"a2d4c089-1da0-424e-9f99-008407498c84\") " Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.198415 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2d4c089-1da0-424e-9f99-008407498c84-dns-svc\") pod \"a2d4c089-1da0-424e-9f99-008407498c84\" (UID: \"a2d4c089-1da0-424e-9f99-008407498c84\") " Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.198484 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2d4c089-1da0-424e-9f99-008407498c84-config\") pod \"a2d4c089-1da0-424e-9f99-008407498c84\" (UID: \"a2d4c089-1da0-424e-9f99-008407498c84\") " Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.198583 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2d4c089-1da0-424e-9f99-008407498c84-ovsdbserver-nb\") pod \"a2d4c089-1da0-424e-9f99-008407498c84\" (UID: \"a2d4c089-1da0-424e-9f99-008407498c84\") " Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.203187 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2d4c089-1da0-424e-9f99-008407498c84-kube-api-access-sz569" (OuterVolumeSpecName: "kube-api-access-sz569") pod "a2d4c089-1da0-424e-9f99-008407498c84" (UID: "a2d4c089-1da0-424e-9f99-008407498c84"). InnerVolumeSpecName "kube-api-access-sz569". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.245061 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2d4c089-1da0-424e-9f99-008407498c84-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a2d4c089-1da0-424e-9f99-008407498c84" (UID: "a2d4c089-1da0-424e-9f99-008407498c84"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.250431 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2d4c089-1da0-424e-9f99-008407498c84-config" (OuterVolumeSpecName: "config") pod "a2d4c089-1da0-424e-9f99-008407498c84" (UID: "a2d4c089-1da0-424e-9f99-008407498c84"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.254611 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2d4c089-1da0-424e-9f99-008407498c84-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a2d4c089-1da0-424e-9f99-008407498c84" (UID: "a2d4c089-1da0-424e-9f99-008407498c84"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.300580 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz569\" (UniqueName: \"kubernetes.io/projected/a2d4c089-1da0-424e-9f99-008407498c84-kube-api-access-sz569\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.300611 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2d4c089-1da0-424e-9f99-008407498c84-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.300620 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2d4c089-1da0-424e-9f99-008407498c84-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.300629 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2d4c089-1da0-424e-9f99-008407498c84-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.506523 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d69aef12-ac48-41f7-8a14-a561edab0ae7","Type":"ContainerStarted","Data":"91aa8b7933eda498160fe720c4f474e676fcad8053d1ffc08ce9de0eddca82b8"} Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.506564 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d69aef12-ac48-41f7-8a14-a561edab0ae7","Type":"ContainerStarted","Data":"e50e54344c8e8b8c4465f05fda6f13d5222f00a1171727f117c5308aa992b441"} Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.506693 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.511691 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" event={"ID":"c06e0509-685b-4010-9aef-1388bc28248d","Type":"ContainerStarted","Data":"30dfb255982a92013d72993cf922c1d38121fed24accaafc65f742335785a2ce"} Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.512522 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.515348 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" event={"ID":"8cd19c31-4252-4de7-a673-9da7aedcb785","Type":"ContainerStarted","Data":"137ef8da742a762887455130866543407aab4e626fc693e72bbf0ba327725c4f"} Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.517473 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.519617 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" 
event={"ID":"a2d4c089-1da0-424e-9f99-008407498c84","Type":"ContainerDied","Data":"b29df059ce6bb06a2e19ec34d7aab1136851e890c03a603fea3383c675972b5a"} Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.519674 4870 scope.go:117] "RemoveContainer" containerID="08a90b3a0b430b6fe52c0e915f7af8b01113438944cfd79325217422f6e2a114" Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.519787 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57b46657c9-lg2gz" Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.527346 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dlnvg" Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.527358 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gkrl7" event={"ID":"4406e732-41a8-48a1-954a-6dbe4483a79a","Type":"ContainerStarted","Data":"c2c9f5c52e6901456bac3bbffbdb6a069ed3e92d4e7f13f97a8b660b6688ff9d"} Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.538908 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.33422093 podStartE2EDuration="4.538888023s" podCreationTimestamp="2026-01-30 08:26:40 +0000 UTC" firstStartedPulling="2026-01-30 08:26:41.829444788 +0000 UTC m=+1040.524991897" lastFinishedPulling="2026-01-30 08:26:44.034111881 +0000 UTC m=+1042.729658990" observedRunningTime="2026-01-30 08:26:44.525159307 +0000 UTC m=+1043.220706416" watchObservedRunningTime="2026-01-30 08:26:44.538888023 +0000 UTC m=+1043.234435132" Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.545207 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" podStartSLOduration=5.5451912100000005 podStartE2EDuration="5.54519121s" podCreationTimestamp="2026-01-30 08:26:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:26:44.542973795 +0000 UTC m=+1043.238520924" watchObservedRunningTime="2026-01-30 08:26:44.54519121 +0000 UTC m=+1043.240738319" Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.565562 4870 scope.go:117] "RemoveContainer" containerID="59e1a6d7a27f28bf32baea3e14c93e926f69980f3fb15bd86a63262f4100c81d" Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.573689 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" podStartSLOduration=3.573659923 podStartE2EDuration="3.573659923s" podCreationTimestamp="2026-01-30 08:26:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:26:44.572437977 +0000 UTC m=+1043.267985126" watchObservedRunningTime="2026-01-30 08:26:44.573659923 +0000 UTC m=+1043.269207032" Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.635571 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-dlnvg"] Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.642441 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-dlnvg"] Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.649987 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57b46657c9-lg2gz"] Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.657065 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-57b46657c9-lg2gz"] Jan 30 08:26:44 crc kubenswrapper[4870]: I0130 08:26:44.768442 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7d56d856cf-n69v7" podUID="033dbc66-0baa-46b3-8fda-3881303e4e40" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.103:5353: i/o timeout" Jan 30 08:26:46 crc kubenswrapper[4870]: I0130 08:26:46.087001 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ccf52cf-97d4-4b27-8305-24222e79cc73" path="/var/lib/kubelet/pods/8ccf52cf-97d4-4b27-8305-24222e79cc73/volumes" Jan 30 08:26:46 crc kubenswrapper[4870]: I0130 08:26:46.088733 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2d4c089-1da0-424e-9f99-008407498c84" path="/var/lib/kubelet/pods/a2d4c089-1da0-424e-9f99-008407498c84/volumes" Jan 30 08:26:46 crc kubenswrapper[4870]: I0130 08:26:46.150571 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-etc-swift\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:26:46 crc kubenswrapper[4870]: E0130 08:26:46.150727 4870 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:26:46 crc kubenswrapper[4870]: E0130 08:26:46.150741 4870 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:26:46 crc kubenswrapper[4870]: E0130 08:26:46.150786 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-etc-swift podName:46634e41-7d5b-4181-b824-716bb37fca47 nodeName:}" failed. No retries permitted until 2026-01-30 08:26:50.150773789 +0000 UTC m=+1048.846320898 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-etc-swift") pod "swift-storage-0" (UID: "46634e41-7d5b-4181-b824-716bb37fca47") : configmap "swift-ring-files" not found Jan 30 08:26:47 crc kubenswrapper[4870]: I0130 08:26:47.553486 4870 generic.go:334] "Generic (PLEG): container finished" podID="31607550-5ccc-4b0b-9fbd-18007a61dcff" containerID="0a51df6c2a8c835be83788e6c0e9cc99339b4ef2dc9fce8dc1f0609f8b094b25" exitCode=0 Jan 30 08:26:47 crc kubenswrapper[4870]: I0130 08:26:47.553595 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"31607550-5ccc-4b0b-9fbd-18007a61dcff","Type":"ContainerDied","Data":"0a51df6c2a8c835be83788e6c0e9cc99339b4ef2dc9fce8dc1f0609f8b094b25"} Jan 30 08:26:47 crc kubenswrapper[4870]: I0130 08:26:47.555907 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gkrl7" event={"ID":"4406e732-41a8-48a1-954a-6dbe4483a79a","Type":"ContainerStarted","Data":"eda5abff9e7bbd3cf114a7856edb58fc9717ac9b1210df0aa05845e17e8d856e"} Jan 30 08:26:47 crc kubenswrapper[4870]: I0130 08:26:47.564349 4870 generic.go:334] "Generic (PLEG): container finished" podID="2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a" containerID="36c6b3f0e330f4c5764ceb8dd30c047b28952964612889bae6f27160bd91c81f" exitCode=0 Jan 30 08:26:47 crc kubenswrapper[4870]: I0130 08:26:47.564420 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a","Type":"ContainerDied","Data":"36c6b3f0e330f4c5764ceb8dd30c047b28952964612889bae6f27160bd91c81f"} Jan 30 08:26:47 crc kubenswrapper[4870]: I0130 08:26:47.623082 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-gkrl7" podStartSLOduration=3.11777077 podStartE2EDuration="5.623065272s" podCreationTimestamp="2026-01-30 08:26:42 +0000 UTC" firstStartedPulling="2026-01-30 08:26:43.98509933 +0000 UTC m=+1042.680646439" lastFinishedPulling="2026-01-30 08:26:46.490393832 +0000 UTC m=+1045.185940941" observedRunningTime="2026-01-30 08:26:47.621487846 +0000 UTC m=+1046.317034955" watchObservedRunningTime="2026-01-30 08:26:47.623065272 +0000 UTC m=+1046.318612391" Jan 30 08:26:49 crc kubenswrapper[4870]: I0130 08:26:49.373159 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" Jan 30 08:26:50 crc kubenswrapper[4870]: I0130 08:26:50.232058 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-etc-swift\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:26:50 crc kubenswrapper[4870]: E0130 08:26:50.232393 4870 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:26:50 crc kubenswrapper[4870]: E0130 08:26:50.232434 4870 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:26:50 crc kubenswrapper[4870]: E0130 08:26:50.232532 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-etc-swift podName:46634e41-7d5b-4181-b824-716bb37fca47 nodeName:}" failed. 
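
The startup-latency entry for swift-ring-rebalance-gkrl7 above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure less the image-pull window (lastFinishedPulling minus firstStartedPulling). Checking the arithmetic with the timestamps reduced to seconds past 08:26:00 UTC:

    # Values copied from the pod_startup_latency_tracker entry above.
    created   = 42.000000000   # podCreationTimestamp   08:26:42
    observed  = 47.623065272   # watchObservedRunningTime
    pull_from = 43.985099330   # firstStartedPulling
    pull_to   = 46.490393832   # lastFinishedPulling

    e2e = observed - created              # 5.623065272s -> podStartE2EDuration
    slo = e2e - (pull_to - pull_from)     # 3.117770770s -> podStartSLOduration
    print(f"E2E={e2e:.9f}s  SLO={slo:.9f}s")
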
No retries permitted until 2026-01-30 08:26:58.232505038 +0000 UTC m=+1056.928052187 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-etc-swift") pod "swift-storage-0" (UID: "46634e41-7d5b-4181-b824-716bb37fca47") : configmap "swift-ring-files" not found Jan 30 08:26:51 crc kubenswrapper[4870]: I0130 08:26:51.621336 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" Jan 30 08:26:51 crc kubenswrapper[4870]: I0130 08:26:51.720076 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5899d7d557-qxdtt"] Jan 30 08:26:51 crc kubenswrapper[4870]: I0130 08:26:51.720331 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" podUID="c06e0509-685b-4010-9aef-1388bc28248d" containerName="dnsmasq-dns" containerID="cri-o://30dfb255982a92013d72993cf922c1d38121fed24accaafc65f742335785a2ce" gracePeriod=10 Jan 30 08:26:54 crc kubenswrapper[4870]: I0130 08:26:54.372073 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" podUID="c06e0509-685b-4010-9aef-1388bc28248d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: connect: connection refused" Jan 30 08:26:54 crc kubenswrapper[4870]: I0130 08:26:54.643906 4870 generic.go:334] "Generic (PLEG): container finished" podID="c06e0509-685b-4010-9aef-1388bc28248d" containerID="30dfb255982a92013d72993cf922c1d38121fed24accaafc65f742335785a2ce" exitCode=0 Jan 30 08:26:54 crc kubenswrapper[4870]: I0130 08:26:54.643930 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" event={"ID":"c06e0509-685b-4010-9aef-1388bc28248d","Type":"ContainerDied","Data":"30dfb255982a92013d72993cf922c1d38121fed24accaafc65f742335785a2ce"} Jan 30 08:26:54 crc kubenswrapper[4870]: I0130 08:26:54.995415 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" Jan 30 08:26:55 crc kubenswrapper[4870]: I0130 08:26:55.025975 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smqm6\" (UniqueName: \"kubernetes.io/projected/c06e0509-685b-4010-9aef-1388bc28248d-kube-api-access-smqm6\") pod \"c06e0509-685b-4010-9aef-1388bc28248d\" (UID: \"c06e0509-685b-4010-9aef-1388bc28248d\") " Jan 30 08:26:55 crc kubenswrapper[4870]: I0130 08:26:55.026112 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-ovsdbserver-sb\") pod \"c06e0509-685b-4010-9aef-1388bc28248d\" (UID: \"c06e0509-685b-4010-9aef-1388bc28248d\") " Jan 30 08:26:55 crc kubenswrapper[4870]: I0130 08:26:55.026279 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-ovsdbserver-nb\") pod \"c06e0509-685b-4010-9aef-1388bc28248d\" (UID: \"c06e0509-685b-4010-9aef-1388bc28248d\") " Jan 30 08:26:55 crc kubenswrapper[4870]: I0130 08:26:55.027763 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-config\") pod \"c06e0509-685b-4010-9aef-1388bc28248d\" (UID: \"c06e0509-685b-4010-9aef-1388bc28248d\") " Jan 30 08:26:55 crc kubenswrapper[4870]: I0130 08:26:55.028491 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-dns-svc\") pod \"c06e0509-685b-4010-9aef-1388bc28248d\" (UID: \"c06e0509-685b-4010-9aef-1388bc28248d\") " Jan 30 08:26:55 crc kubenswrapper[4870]: I0130 08:26:55.034099 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c06e0509-685b-4010-9aef-1388bc28248d-kube-api-access-smqm6" (OuterVolumeSpecName: "kube-api-access-smqm6") pod "c06e0509-685b-4010-9aef-1388bc28248d" (UID: "c06e0509-685b-4010-9aef-1388bc28248d"). InnerVolumeSpecName "kube-api-access-smqm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:26:55 crc kubenswrapper[4870]: I0130 08:26:55.077056 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c06e0509-685b-4010-9aef-1388bc28248d" (UID: "c06e0509-685b-4010-9aef-1388bc28248d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:26:55 crc kubenswrapper[4870]: I0130 08:26:55.083377 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c06e0509-685b-4010-9aef-1388bc28248d" (UID: "c06e0509-685b-4010-9aef-1388bc28248d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:26:55 crc kubenswrapper[4870]: I0130 08:26:55.087275 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-config" (OuterVolumeSpecName: "config") pod "c06e0509-685b-4010-9aef-1388bc28248d" (UID: "c06e0509-685b-4010-9aef-1388bc28248d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:26:55 crc kubenswrapper[4870]: I0130 08:26:55.096231 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c06e0509-685b-4010-9aef-1388bc28248d" (UID: "c06e0509-685b-4010-9aef-1388bc28248d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:26:55 crc kubenswrapper[4870]: I0130 08:26:55.132338 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:55 crc kubenswrapper[4870]: I0130 08:26:55.132375 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:55 crc kubenswrapper[4870]: I0130 08:26:55.132385 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smqm6\" (UniqueName: \"kubernetes.io/projected/c06e0509-685b-4010-9aef-1388bc28248d-kube-api-access-smqm6\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:55 crc kubenswrapper[4870]: I0130 08:26:55.132396 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:55 crc kubenswrapper[4870]: I0130 08:26:55.132406 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c06e0509-685b-4010-9aef-1388bc28248d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:26:55 crc kubenswrapper[4870]: I0130 08:26:55.657334 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" event={"ID":"c06e0509-685b-4010-9aef-1388bc28248d","Type":"ContainerDied","Data":"88033e8d4c732a95cee343d71511e6c36bd6bfa8fe952134c2f0152ffc8ba5b1"} Jan 30 08:26:55 crc kubenswrapper[4870]: I0130 08:26:55.657413 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5899d7d557-qxdtt" Jan 30 08:26:55 crc kubenswrapper[4870]: I0130 08:26:55.657417 4870 scope.go:117] "RemoveContainer" containerID="30dfb255982a92013d72993cf922c1d38121fed24accaafc65f742335785a2ce" Jan 30 08:26:55 crc kubenswrapper[4870]: I0130 08:26:55.702809 4870 scope.go:117] "RemoveContainer" containerID="38cec7542121e32b4b05300244ea351b4cc373c4ddccf22126a36f614eae49e4" Jan 30 08:26:55 crc kubenswrapper[4870]: I0130 08:26:55.711902 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5899d7d557-qxdtt"] Jan 30 08:26:55 crc kubenswrapper[4870]: I0130 08:26:55.721193 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5899d7d557-qxdtt"] Jan 30 08:26:56 crc kubenswrapper[4870]: I0130 08:26:56.091626 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c06e0509-685b-4010-9aef-1388bc28248d" path="/var/lib/kubelet/pods/c06e0509-685b-4010-9aef-1388bc28248d/volumes" Jan 30 08:26:56 crc kubenswrapper[4870]: I0130 08:26:56.673932 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a","Type":"ContainerStarted","Data":"588fbb7f8fee6acd1e8c15f77c43a3c289e7cd1a0f7662ff0f25b69677dcd399"} Jan 30 08:26:57 crc kubenswrapper[4870]: I0130 08:26:57.692371 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"31607550-5ccc-4b0b-9fbd-18007a61dcff","Type":"ContainerStarted","Data":"e70d386e14055e09ce722da0b01bbdd85f97f4ce6f3a9e4ffe4a30dc7296472a"} Jan 30 08:26:57 crc kubenswrapper[4870]: I0130 08:26:57.705403 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7","Type":"ContainerStarted","Data":"4c9a2cc96afb4697dcd9efa47cf237f34c2e2a0fb97e86a04bed4e71098a047b"} Jan 30 08:26:57 crc kubenswrapper[4870]: I0130 08:26:57.731872 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=40.771462729 podStartE2EDuration="50.73184977s" podCreationTimestamp="2026-01-30 08:26:07 +0000 UTC" firstStartedPulling="2026-01-30 08:26:20.681624907 +0000 UTC m=+1019.377172016" lastFinishedPulling="2026-01-30 08:26:30.642011938 +0000 UTC m=+1029.337559057" observedRunningTime="2026-01-30 08:26:57.72951205 +0000 UTC m=+1056.425059199" watchObservedRunningTime="2026-01-30 08:26:57.73184977 +0000 UTC m=+1056.427396919" Jan 30 08:26:57 crc kubenswrapper[4870]: I0130 08:26:57.767839 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=41.302867335 podStartE2EDuration="51.767807785s" podCreationTimestamp="2026-01-30 08:26:06 +0000 UTC" firstStartedPulling="2026-01-30 08:26:20.670191122 +0000 UTC m=+1019.365738221" lastFinishedPulling="2026-01-30 08:26:31.135131562 +0000 UTC m=+1029.830678671" observedRunningTime="2026-01-30 08:26:57.763078624 +0000 UTC m=+1056.458625803" watchObservedRunningTime="2026-01-30 08:26:57.767807785 +0000 UTC m=+1056.463354934" Jan 30 08:26:58 crc kubenswrapper[4870]: I0130 08:26:58.300930 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-etc-swift\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:26:58 crc 
kubenswrapper[4870]: E0130 08:26:58.301211 4870 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:26:58 crc kubenswrapper[4870]: E0130 08:26:58.301236 4870 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:26:58 crc kubenswrapper[4870]: E0130 08:26:58.301306 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-etc-swift podName:46634e41-7d5b-4181-b824-716bb37fca47 nodeName:}" failed. No retries permitted until 2026-01-30 08:27:14.301280746 +0000 UTC m=+1072.996827895 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-etc-swift") pod "swift-storage-0" (UID: "46634e41-7d5b-4181-b824-716bb37fca47") : configmap "swift-ring-files" not found Jan 30 08:26:59 crc kubenswrapper[4870]: I0130 08:26:59.111733 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 08:26:59 crc kubenswrapper[4870]: I0130 08:26:59.112537 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 08:27:00 crc kubenswrapper[4870]: I0130 08:27:00.735325 4870 generic.go:334] "Generic (PLEG): container finished" podID="4406e732-41a8-48a1-954a-6dbe4483a79a" containerID="eda5abff9e7bbd3cf114a7856edb58fc9717ac9b1210df0aa05845e17e8d856e" exitCode=0 Jan 30 08:27:00 crc kubenswrapper[4870]: I0130 08:27:00.735388 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gkrl7" event={"ID":"4406e732-41a8-48a1-954a-6dbe4483a79a","Type":"ContainerDied","Data":"eda5abff9e7bbd3cf114a7856edb58fc9717ac9b1210df0aa05845e17e8d856e"} Jan 30 08:27:01 crc kubenswrapper[4870]: I0130 08:27:01.431957 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 08:27:01 crc kubenswrapper[4870]: I0130 08:27:01.750084 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7","Type":"ContainerStarted","Data":"0e6b5e7cf5bb76c5d4597f6eef6bb065c51cbad9cc2aae78711fd5e59b7109c8"} Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.064215 4870 util.go:48] "No ready sandbox for pod can be found. 
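
With this entry the etc-swift retry series in the excerpt is complete: durationBeforeRetry doubles 1s → 2s → 4s → 8s → 16s between 08:26:43 and 08:26:58, pushing the next attempt out to 08:27:14. That is the kubelet's per-operation exponential backoff from nestedpendingoperations; a sketch of the schedule these entries imply, where the initial delay and factor are read off the log and the cap is an assumption (the excerpt never reaches it):

    def backoff_schedule(initial: float = 1.0, factor: float = 2.0,
                         cap: float = 120.0):
        """Yield retry delays: initial, doubling each time, clamped at cap."""
        delay = initial
        while True:
            yield min(delay, cap)
            delay *= factor

    gen = backoff_schedule()
    print([next(gen) for _ in range(5)])   # [1.0, 2.0, 4.0, 8.0, 16.0] as logged
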
Need to start a new one" pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.197662 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4406e732-41a8-48a1-954a-6dbe4483a79a-dispersionconf\") pod \"4406e732-41a8-48a1-954a-6dbe4483a79a\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.197740 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4406e732-41a8-48a1-954a-6dbe4483a79a-swiftconf\") pod \"4406e732-41a8-48a1-954a-6dbe4483a79a\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.197803 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4406e732-41a8-48a1-954a-6dbe4483a79a-etc-swift\") pod \"4406e732-41a8-48a1-954a-6dbe4483a79a\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.197824 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4406e732-41a8-48a1-954a-6dbe4483a79a-ring-data-devices\") pod \"4406e732-41a8-48a1-954a-6dbe4483a79a\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.197853 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4406e732-41a8-48a1-954a-6dbe4483a79a-combined-ca-bundle\") pod \"4406e732-41a8-48a1-954a-6dbe4483a79a\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.197905 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4406e732-41a8-48a1-954a-6dbe4483a79a-scripts\") pod \"4406e732-41a8-48a1-954a-6dbe4483a79a\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.197927 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6z9k\" (UniqueName: \"kubernetes.io/projected/4406e732-41a8-48a1-954a-6dbe4483a79a-kube-api-access-d6z9k\") pod \"4406e732-41a8-48a1-954a-6dbe4483a79a\" (UID: \"4406e732-41a8-48a1-954a-6dbe4483a79a\") " Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.201260 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4406e732-41a8-48a1-954a-6dbe4483a79a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4406e732-41a8-48a1-954a-6dbe4483a79a" (UID: "4406e732-41a8-48a1-954a-6dbe4483a79a"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.202130 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4406e732-41a8-48a1-954a-6dbe4483a79a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4406e732-41a8-48a1-954a-6dbe4483a79a" (UID: "4406e732-41a8-48a1-954a-6dbe4483a79a"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.226570 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4406e732-41a8-48a1-954a-6dbe4483a79a-kube-api-access-d6z9k" (OuterVolumeSpecName: "kube-api-access-d6z9k") pod "4406e732-41a8-48a1-954a-6dbe4483a79a" (UID: "4406e732-41a8-48a1-954a-6dbe4483a79a"). InnerVolumeSpecName "kube-api-access-d6z9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.231814 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4406e732-41a8-48a1-954a-6dbe4483a79a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4406e732-41a8-48a1-954a-6dbe4483a79a" (UID: "4406e732-41a8-48a1-954a-6dbe4483a79a"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.248310 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4406e732-41a8-48a1-954a-6dbe4483a79a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4406e732-41a8-48a1-954a-6dbe4483a79a" (UID: "4406e732-41a8-48a1-954a-6dbe4483a79a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.280458 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4406e732-41a8-48a1-954a-6dbe4483a79a-scripts" (OuterVolumeSpecName: "scripts") pod "4406e732-41a8-48a1-954a-6dbe4483a79a" (UID: "4406e732-41a8-48a1-954a-6dbe4483a79a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.284822 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4406e732-41a8-48a1-954a-6dbe4483a79a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4406e732-41a8-48a1-954a-6dbe4483a79a" (UID: "4406e732-41a8-48a1-954a-6dbe4483a79a"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.300363 4870 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4406e732-41a8-48a1-954a-6dbe4483a79a-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.300394 4870 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4406e732-41a8-48a1-954a-6dbe4483a79a-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.300404 4870 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4406e732-41a8-48a1-954a-6dbe4483a79a-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.300412 4870 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4406e732-41a8-48a1-954a-6dbe4483a79a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.300422 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4406e732-41a8-48a1-954a-6dbe4483a79a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.300430 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4406e732-41a8-48a1-954a-6dbe4483a79a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.300439 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6z9k\" (UniqueName: \"kubernetes.io/projected/4406e732-41a8-48a1-954a-6dbe4483a79a-kube-api-access-d6z9k\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.758863 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gkrl7" event={"ID":"4406e732-41a8-48a1-954a-6dbe4483a79a","Type":"ContainerDied","Data":"c2c9f5c52e6901456bac3bbffbdb6a069ed3e92d4e7f13f97a8b660b6688ff9d"} Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.758921 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2c9f5c52e6901456bac3bbffbdb6a069ed3e92d4e7f13f97a8b660b6688ff9d" Jan 30 08:27:02 crc kubenswrapper[4870]: I0130 08:27:02.758941 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gkrl7" Jan 30 08:27:03 crc kubenswrapper[4870]: I0130 08:27:03.753526 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 30 08:27:03 crc kubenswrapper[4870]: I0130 08:27:03.918726 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 30 08:27:04 crc kubenswrapper[4870]: I0130 08:27:04.782587 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7","Type":"ContainerStarted","Data":"49386d341f6cc7754611b6a6c194cd8140385a7b91dc049f19bb791c005d6a86"} Jan 30 08:27:04 crc kubenswrapper[4870]: I0130 08:27:04.819852 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=11.25183156 podStartE2EDuration="53.819828052s" podCreationTimestamp="2026-01-30 08:26:11 +0000 UTC" firstStartedPulling="2026-01-30 08:26:21.210237746 +0000 UTC m=+1019.905784855" lastFinishedPulling="2026-01-30 08:27:03.778234238 +0000 UTC m=+1062.473781347" observedRunningTime="2026-01-30 08:27:04.813092282 +0000 UTC m=+1063.508639471" watchObservedRunningTime="2026-01-30 08:27:04.819828052 +0000 UTC m=+1063.515375201" Jan 30 08:27:04 crc kubenswrapper[4870]: I0130 08:27:04.850653 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-rwchz" podUID="496b707b-8de6-4228-b4fd-a48f3709586c" containerName="ovn-controller" probeResult="failure" output=< Jan 30 08:27:04 crc kubenswrapper[4870]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 08:27:04 crc kubenswrapper[4870]: > Jan 30 08:27:07 crc kubenswrapper[4870]: I0130 08:27:07.543974 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 30 08:27:07 crc kubenswrapper[4870]: I0130 08:27:07.544030 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 30 08:27:07 crc kubenswrapper[4870]: I0130 08:27:07.614251 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:07 crc kubenswrapper[4870]: I0130 08:27:07.802013 4870 generic.go:334] "Generic (PLEG): container finished" podID="97f21b9d-25bf-4a64-94ef-51d83b662ab3" containerID="55e6a4b3af15640088e3e1927ba88636a5cf35ec532fc2df3395e46ebcf07d79" exitCode=0 Jan 30 08:27:07 crc kubenswrapper[4870]: I0130 08:27:07.802089 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"97f21b9d-25bf-4a64-94ef-51d83b662ab3","Type":"ContainerDied","Data":"55e6a4b3af15640088e3e1927ba88636a5cf35ec532fc2df3395e46ebcf07d79"} Jan 30 08:27:07 crc kubenswrapper[4870]: I0130 08:27:07.803520 4870 generic.go:334] "Generic (PLEG): container finished" podID="0ee2f8ed-b030-40fc-90fb-32a4e404b1fd" containerID="15912521352db9ebeacf4667e3f3fd3607893d70df7becfdb7861d554dec9b49" exitCode=0 Jan 30 08:27:07 crc kubenswrapper[4870]: I0130 08:27:07.803585 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd","Type":"ContainerDied","Data":"15912521352db9ebeacf4667e3f3fd3607893d70df7becfdb7861d554dec9b49"} Jan 30 08:27:07 crc kubenswrapper[4870]: I0130 08:27:07.805300 4870 generic.go:334] "Generic (PLEG): container 
finished" podID="2ab884a9-b47a-476a-8f89-140093b96527" containerID="1990bb623e12d14af684a0ba5a125e7077393acb0eeb246cda1b7953fb41a71d" exitCode=0 Jan 30 08:27:07 crc kubenswrapper[4870]: I0130 08:27:07.805351 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"2ab884a9-b47a-476a-8f89-140093b96527","Type":"ContainerDied","Data":"1990bb623e12d14af684a0ba5a125e7077393acb0eeb246cda1b7953fb41a71d"} Jan 30 08:27:07 crc kubenswrapper[4870]: I0130 08:27:07.886245 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-7k8wj"] Jan 30 08:27:07 crc kubenswrapper[4870]: E0130 08:27:07.886597 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2d4c089-1da0-424e-9f99-008407498c84" containerName="dnsmasq-dns" Jan 30 08:27:07 crc kubenswrapper[4870]: I0130 08:27:07.886613 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2d4c089-1da0-424e-9f99-008407498c84" containerName="dnsmasq-dns" Jan 30 08:27:07 crc kubenswrapper[4870]: E0130 08:27:07.886624 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4406e732-41a8-48a1-954a-6dbe4483a79a" containerName="swift-ring-rebalance" Jan 30 08:27:07 crc kubenswrapper[4870]: I0130 08:27:07.886631 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="4406e732-41a8-48a1-954a-6dbe4483a79a" containerName="swift-ring-rebalance" Jan 30 08:27:07 crc kubenswrapper[4870]: E0130 08:27:07.886655 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c06e0509-685b-4010-9aef-1388bc28248d" containerName="init" Jan 30 08:27:07 crc kubenswrapper[4870]: I0130 08:27:07.886661 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="c06e0509-685b-4010-9aef-1388bc28248d" containerName="init" Jan 30 08:27:07 crc kubenswrapper[4870]: E0130 08:27:07.886669 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2d4c089-1da0-424e-9f99-008407498c84" containerName="init" Jan 30 08:27:07 crc kubenswrapper[4870]: I0130 08:27:07.886675 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2d4c089-1da0-424e-9f99-008407498c84" containerName="init" Jan 30 08:27:07 crc kubenswrapper[4870]: E0130 08:27:07.886687 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c06e0509-685b-4010-9aef-1388bc28248d" containerName="dnsmasq-dns" Jan 30 08:27:07 crc kubenswrapper[4870]: I0130 08:27:07.886692 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="c06e0509-685b-4010-9aef-1388bc28248d" containerName="dnsmasq-dns" Jan 30 08:27:07 crc kubenswrapper[4870]: I0130 08:27:07.887007 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="4406e732-41a8-48a1-954a-6dbe4483a79a" containerName="swift-ring-rebalance" Jan 30 08:27:07 crc kubenswrapper[4870]: I0130 08:27:07.887029 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="c06e0509-685b-4010-9aef-1388bc28248d" containerName="dnsmasq-dns" Jan 30 08:27:07 crc kubenswrapper[4870]: I0130 08:27:07.887037 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2d4c089-1da0-424e-9f99-008407498c84" containerName="dnsmasq-dns" Jan 30 08:27:07 crc kubenswrapper[4870]: I0130 08:27:07.887567 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7k8wj" Jan 30 08:27:07 crc kubenswrapper[4870]: I0130 08:27:07.892280 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7k8wj"] Jan 30 08:27:07 crc kubenswrapper[4870]: I0130 08:27:07.898412 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 08:27:07 crc kubenswrapper[4870]: I0130 08:27:07.909582 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 30 08:27:08 crc kubenswrapper[4870]: I0130 08:27:08.024813 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 30 08:27:08 crc kubenswrapper[4870]: I0130 08:27:08.031859 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2wdt\" (UniqueName: \"kubernetes.io/projected/85bfb16d-8b6c-46e2-a7e3-0a5051aa66df-kube-api-access-m2wdt\") pod \"root-account-create-update-7k8wj\" (UID: \"85bfb16d-8b6c-46e2-a7e3-0a5051aa66df\") " pod="openstack/root-account-create-update-7k8wj" Jan 30 08:27:08 crc kubenswrapper[4870]: I0130 08:27:08.032028 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85bfb16d-8b6c-46e2-a7e3-0a5051aa66df-operator-scripts\") pod \"root-account-create-update-7k8wj\" (UID: \"85bfb16d-8b6c-46e2-a7e3-0a5051aa66df\") " pod="openstack/root-account-create-update-7k8wj" Jan 30 08:27:08 crc kubenswrapper[4870]: I0130 08:27:08.133618 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85bfb16d-8b6c-46e2-a7e3-0a5051aa66df-operator-scripts\") pod \"root-account-create-update-7k8wj\" (UID: \"85bfb16d-8b6c-46e2-a7e3-0a5051aa66df\") " pod="openstack/root-account-create-update-7k8wj" Jan 30 08:27:08 crc kubenswrapper[4870]: I0130 08:27:08.133740 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2wdt\" (UniqueName: \"kubernetes.io/projected/85bfb16d-8b6c-46e2-a7e3-0a5051aa66df-kube-api-access-m2wdt\") pod \"root-account-create-update-7k8wj\" (UID: \"85bfb16d-8b6c-46e2-a7e3-0a5051aa66df\") " pod="openstack/root-account-create-update-7k8wj" Jan 30 08:27:08 crc kubenswrapper[4870]: I0130 08:27:08.134429 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85bfb16d-8b6c-46e2-a7e3-0a5051aa66df-operator-scripts\") pod \"root-account-create-update-7k8wj\" (UID: \"85bfb16d-8b6c-46e2-a7e3-0a5051aa66df\") " pod="openstack/root-account-create-update-7k8wj" Jan 30 08:27:08 crc kubenswrapper[4870]: I0130 08:27:08.157431 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2wdt\" (UniqueName: \"kubernetes.io/projected/85bfb16d-8b6c-46e2-a7e3-0a5051aa66df-kube-api-access-m2wdt\") pod \"root-account-create-update-7k8wj\" (UID: \"85bfb16d-8b6c-46e2-a7e3-0a5051aa66df\") " pod="openstack/root-account-create-update-7k8wj" Jan 30 08:27:08 crc kubenswrapper[4870]: I0130 08:27:08.344218 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7k8wj" Jan 30 08:27:08 crc kubenswrapper[4870]: I0130 08:27:08.816474 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"2ab884a9-b47a-476a-8f89-140093b96527","Type":"ContainerStarted","Data":"255f9bb78983d1770c7c49c251279626a4a8ad762e011e44910f271f2983b30d"} Jan 30 08:27:08 crc kubenswrapper[4870]: I0130 08:27:08.818503 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:27:08 crc kubenswrapper[4870]: I0130 08:27:08.820608 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"97f21b9d-25bf-4a64-94ef-51d83b662ab3","Type":"ContainerStarted","Data":"2d90fc261d4a0e6355b34b516c467ce7b3ce867fbf835cf5614291d45b33a700"} Jan 30 08:27:08 crc kubenswrapper[4870]: I0130 08:27:08.820901 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 08:27:08 crc kubenswrapper[4870]: I0130 08:27:08.822199 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd","Type":"ContainerStarted","Data":"121cd78e5a4355f003ebf17a16fbee0ff6a2d7322239a3f9d8f72539da950f1f"} Jan 30 08:27:08 crc kubenswrapper[4870]: I0130 08:27:08.823315 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:27:08 crc kubenswrapper[4870]: I0130 08:27:08.860886 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-notifications-server-0" podStartSLOduration=55.477880993 podStartE2EDuration="1m4.860852635s" podCreationTimestamp="2026-01-30 08:26:04 +0000 UTC" firstStartedPulling="2026-01-30 08:26:21.259064826 +0000 UTC m=+1019.954611935" lastFinishedPulling="2026-01-30 08:26:30.642036468 +0000 UTC m=+1029.337583577" observedRunningTime="2026-01-30 08:27:08.853546789 +0000 UTC m=+1067.549093928" watchObservedRunningTime="2026-01-30 08:27:08.860852635 +0000 UTC m=+1067.556399744" Jan 30 08:27:08 crc kubenswrapper[4870]: I0130 08:27:08.862122 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7k8wj"] Jan 30 08:27:08 crc kubenswrapper[4870]: W0130 08:27:08.874259 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85bfb16d_8b6c_46e2_a7e3_0a5051aa66df.slice/crio-efac2cd7376b7e1b5a56aa00876b64c5d5188202a07cca8938cbd3daf70f1cc0 WatchSource:0}: Error finding container efac2cd7376b7e1b5a56aa00876b64c5d5188202a07cca8938cbd3daf70f1cc0: Status 404 returned error can't find the container with id efac2cd7376b7e1b5a56aa00876b64c5d5188202a07cca8938cbd3daf70f1cc0 Jan 30 08:27:08 crc kubenswrapper[4870]: I0130 08:27:08.887926 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=56.30244599 podStartE2EDuration="1m4.887901966s" podCreationTimestamp="2026-01-30 08:26:04 +0000 UTC" firstStartedPulling="2026-01-30 08:26:20.68138337 +0000 UTC m=+1019.376930479" lastFinishedPulling="2026-01-30 08:26:29.266839316 +0000 UTC m=+1027.962386455" observedRunningTime="2026-01-30 08:27:08.877838428 +0000 UTC m=+1067.573385577" watchObservedRunningTime="2026-01-30 08:27:08.887901966 +0000 UTC m=+1067.583449105" Jan 30 08:27:08 crc kubenswrapper[4870]: I0130 08:27:08.917497 4870 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=54.680512325 podStartE2EDuration="1m4.917474951s" podCreationTimestamp="2026-01-30 08:26:04 +0000 UTC" firstStartedPulling="2026-01-30 08:26:20.40499617 +0000 UTC m=+1019.100543279" lastFinishedPulling="2026-01-30 08:26:30.641958776 +0000 UTC m=+1029.337505905" observedRunningTime="2026-01-30 08:27:08.913227896 +0000 UTC m=+1067.608775005" watchObservedRunningTime="2026-01-30 08:27:08.917474951 +0000 UTC m=+1067.613022070" Jan 30 08:27:08 crc kubenswrapper[4870]: I0130 08:27:08.969840 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-zdg4s"] Jan 30 08:27:08 crc kubenswrapper[4870]: I0130 08:27:08.971093 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zdg4s" Jan 30 08:27:08 crc kubenswrapper[4870]: I0130 08:27:08.998500 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zdg4s"] Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.063684 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r68t2\" (UniqueName: \"kubernetes.io/projected/5ac3a52d-4734-4be8-9530-6b7b535664f8-kube-api-access-r68t2\") pod \"keystone-db-create-zdg4s\" (UID: \"5ac3a52d-4734-4be8-9530-6b7b535664f8\") " pod="openstack/keystone-db-create-zdg4s" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.063803 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ac3a52d-4734-4be8-9530-6b7b535664f8-operator-scripts\") pod \"keystone-db-create-zdg4s\" (UID: \"5ac3a52d-4734-4be8-9530-6b7b535664f8\") " pod="openstack/keystone-db-create-zdg4s" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.073387 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-be9b-account-create-update-lgqm6"] Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.080013 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-be9b-account-create-update-lgqm6" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.082351 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.094428 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-be9b-account-create-update-lgqm6"] Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.165561 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r68t2\" (UniqueName: \"kubernetes.io/projected/5ac3a52d-4734-4be8-9530-6b7b535664f8-kube-api-access-r68t2\") pod \"keystone-db-create-zdg4s\" (UID: \"5ac3a52d-4734-4be8-9530-6b7b535664f8\") " pod="openstack/keystone-db-create-zdg4s" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.165718 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93cd49cf-8353-49eb-89d2-2d3630503d9f-operator-scripts\") pod \"keystone-be9b-account-create-update-lgqm6\" (UID: \"93cd49cf-8353-49eb-89d2-2d3630503d9f\") " pod="openstack/keystone-be9b-account-create-update-lgqm6" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.167175 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ac3a52d-4734-4be8-9530-6b7b535664f8-operator-scripts\") pod \"keystone-db-create-zdg4s\" (UID: \"5ac3a52d-4734-4be8-9530-6b7b535664f8\") " pod="openstack/keystone-db-create-zdg4s" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.167240 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ac3a52d-4734-4be8-9530-6b7b535664f8-operator-scripts\") pod \"keystone-db-create-zdg4s\" (UID: \"5ac3a52d-4734-4be8-9530-6b7b535664f8\") " pod="openstack/keystone-db-create-zdg4s" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.167289 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5kjs\" (UniqueName: \"kubernetes.io/projected/93cd49cf-8353-49eb-89d2-2d3630503d9f-kube-api-access-x5kjs\") pod \"keystone-be9b-account-create-update-lgqm6\" (UID: \"93cd49cf-8353-49eb-89d2-2d3630503d9f\") " pod="openstack/keystone-be9b-account-create-update-lgqm6" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.189870 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r68t2\" (UniqueName: \"kubernetes.io/projected/5ac3a52d-4734-4be8-9530-6b7b535664f8-kube-api-access-r68t2\") pod \"keystone-db-create-zdg4s\" (UID: \"5ac3a52d-4734-4be8-9530-6b7b535664f8\") " pod="openstack/keystone-db-create-zdg4s" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.268950 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5kjs\" (UniqueName: \"kubernetes.io/projected/93cd49cf-8353-49eb-89d2-2d3630503d9f-kube-api-access-x5kjs\") pod \"keystone-be9b-account-create-update-lgqm6\" (UID: \"93cd49cf-8353-49eb-89d2-2d3630503d9f\") " pod="openstack/keystone-be9b-account-create-update-lgqm6" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.269456 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93cd49cf-8353-49eb-89d2-2d3630503d9f-operator-scripts\") pod 
\"keystone-be9b-account-create-update-lgqm6\" (UID: \"93cd49cf-8353-49eb-89d2-2d3630503d9f\") " pod="openstack/keystone-be9b-account-create-update-lgqm6" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.270197 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93cd49cf-8353-49eb-89d2-2d3630503d9f-operator-scripts\") pod \"keystone-be9b-account-create-update-lgqm6\" (UID: \"93cd49cf-8353-49eb-89d2-2d3630503d9f\") " pod="openstack/keystone-be9b-account-create-update-lgqm6" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.274171 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-772bw"] Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.275219 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-772bw" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.290121 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-772bw"] Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.295999 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5kjs\" (UniqueName: \"kubernetes.io/projected/93cd49cf-8353-49eb-89d2-2d3630503d9f-kube-api-access-x5kjs\") pod \"keystone-be9b-account-create-update-lgqm6\" (UID: \"93cd49cf-8353-49eb-89d2-2d3630503d9f\") " pod="openstack/keystone-be9b-account-create-update-lgqm6" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.308724 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zdg4s" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.370714 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-b24b-account-create-update-d2n4p"] Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.371392 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e990d4f-b684-47e6-8056-08cf765aa33d-operator-scripts\") pod \"placement-db-create-772bw\" (UID: \"9e990d4f-b684-47e6-8056-08cf765aa33d\") " pod="openstack/placement-db-create-772bw" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.371571 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4cc7\" (UniqueName: \"kubernetes.io/projected/9e990d4f-b684-47e6-8056-08cf765aa33d-kube-api-access-m4cc7\") pod \"placement-db-create-772bw\" (UID: \"9e990d4f-b684-47e6-8056-08cf765aa33d\") " pod="openstack/placement-db-create-772bw" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.371691 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b24b-account-create-update-d2n4p" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.374329 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.391567 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b24b-account-create-update-d2n4p"] Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.410089 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-be9b-account-create-update-lgqm6" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.485253 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98chn\" (UniqueName: \"kubernetes.io/projected/3b66abfb-27d1-415e-abf2-2cb855a2bcaf-kube-api-access-98chn\") pod \"placement-b24b-account-create-update-d2n4p\" (UID: \"3b66abfb-27d1-415e-abf2-2cb855a2bcaf\") " pod="openstack/placement-b24b-account-create-update-d2n4p" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.485342 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b66abfb-27d1-415e-abf2-2cb855a2bcaf-operator-scripts\") pod \"placement-b24b-account-create-update-d2n4p\" (UID: \"3b66abfb-27d1-415e-abf2-2cb855a2bcaf\") " pod="openstack/placement-b24b-account-create-update-d2n4p" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.485392 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4cc7\" (UniqueName: \"kubernetes.io/projected/9e990d4f-b684-47e6-8056-08cf765aa33d-kube-api-access-m4cc7\") pod \"placement-db-create-772bw\" (UID: \"9e990d4f-b684-47e6-8056-08cf765aa33d\") " pod="openstack/placement-db-create-772bw" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.485445 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e990d4f-b684-47e6-8056-08cf765aa33d-operator-scripts\") pod \"placement-db-create-772bw\" (UID: \"9e990d4f-b684-47e6-8056-08cf765aa33d\") " pod="openstack/placement-db-create-772bw" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.486291 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e990d4f-b684-47e6-8056-08cf765aa33d-operator-scripts\") pod \"placement-db-create-772bw\" (UID: \"9e990d4f-b684-47e6-8056-08cf765aa33d\") " pod="openstack/placement-db-create-772bw" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.504355 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4cc7\" (UniqueName: \"kubernetes.io/projected/9e990d4f-b684-47e6-8056-08cf765aa33d-kube-api-access-m4cc7\") pod \"placement-db-create-772bw\" (UID: \"9e990d4f-b684-47e6-8056-08cf765aa33d\") " pod="openstack/placement-db-create-772bw" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.587248 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b66abfb-27d1-415e-abf2-2cb855a2bcaf-operator-scripts\") pod \"placement-b24b-account-create-update-d2n4p\" (UID: \"3b66abfb-27d1-415e-abf2-2cb855a2bcaf\") " pod="openstack/placement-b24b-account-create-update-d2n4p" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.587656 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98chn\" (UniqueName: \"kubernetes.io/projected/3b66abfb-27d1-415e-abf2-2cb855a2bcaf-kube-api-access-98chn\") pod \"placement-b24b-account-create-update-d2n4p\" (UID: \"3b66abfb-27d1-415e-abf2-2cb855a2bcaf\") " pod="openstack/placement-b24b-account-create-update-d2n4p" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.587898 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3b66abfb-27d1-415e-abf2-2cb855a2bcaf-operator-scripts\") pod \"placement-b24b-account-create-update-d2n4p\" (UID: \"3b66abfb-27d1-415e-abf2-2cb855a2bcaf\") " pod="openstack/placement-b24b-account-create-update-d2n4p" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.589996 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-772bw" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.602414 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98chn\" (UniqueName: \"kubernetes.io/projected/3b66abfb-27d1-415e-abf2-2cb855a2bcaf-kube-api-access-98chn\") pod \"placement-b24b-account-create-update-d2n4p\" (UID: \"3b66abfb-27d1-415e-abf2-2cb855a2bcaf\") " pod="openstack/placement-b24b-account-create-update-d2n4p" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.688349 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b24b-account-create-update-d2n4p" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.793051 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zdg4s"] Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.832091 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-rwchz" podUID="496b707b-8de6-4228-b4fd-a48f3709586c" containerName="ovn-controller" probeResult="failure" output=< Jan 30 08:27:09 crc kubenswrapper[4870]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 08:27:09 crc kubenswrapper[4870]: > Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.836303 4870 generic.go:334] "Generic (PLEG): container finished" podID="85bfb16d-8b6c-46e2-a7e3-0a5051aa66df" containerID="7a7adac6f43dd00107198ca12f07a56a507d2b37982cb644a01747e8eb0b5b52" exitCode=0 Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.836365 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7k8wj" event={"ID":"85bfb16d-8b6c-46e2-a7e3-0a5051aa66df","Type":"ContainerDied","Data":"7a7adac6f43dd00107198ca12f07a56a507d2b37982cb644a01747e8eb0b5b52"} Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.836393 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7k8wj" event={"ID":"85bfb16d-8b6c-46e2-a7e3-0a5051aa66df","Type":"ContainerStarted","Data":"efac2cd7376b7e1b5a56aa00876b64c5d5188202a07cca8938cbd3daf70f1cc0"} Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.838946 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zdg4s" event={"ID":"5ac3a52d-4734-4be8-9530-6b7b535664f8","Type":"ContainerStarted","Data":"9d8f67b6ce532dd69259548889e2d099125d75a9ac7b7e4d54afd39958c000d2"} Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.852172 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.871950 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-gznh8" Jan 30 08:27:09 crc kubenswrapper[4870]: I0130 08:27:09.946367 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-be9b-account-create-update-lgqm6"] Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.086599 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-772bw"] Jan 30 
08:27:10 crc kubenswrapper[4870]: W0130 08:27:10.093053 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e990d4f_b684_47e6_8056_08cf765aa33d.slice/crio-714d1725f1b75a23ac0a271ba8ee0cf966f9c24054a5cb144dabe40f11e9f5c3 WatchSource:0}: Error finding container 714d1725f1b75a23ac0a271ba8ee0cf966f9c24054a5cb144dabe40f11e9f5c3: Status 404 returned error can't find the container with id 714d1725f1b75a23ac0a271ba8ee0cf966f9c24054a5cb144dabe40f11e9f5c3 Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.107930 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rwchz-config-88kmx"] Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.109763 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rwchz-config-88kmx" Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.114791 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.132585 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rwchz-config-88kmx"] Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.198510 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/377059c1-2286-4127-b4cc-d19ef6bac327-var-run-ovn\") pod \"ovn-controller-rwchz-config-88kmx\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " pod="openstack/ovn-controller-rwchz-config-88kmx" Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.198569 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/377059c1-2286-4127-b4cc-d19ef6bac327-scripts\") pod \"ovn-controller-rwchz-config-88kmx\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " pod="openstack/ovn-controller-rwchz-config-88kmx" Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.198749 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52pkr\" (UniqueName: \"kubernetes.io/projected/377059c1-2286-4127-b4cc-d19ef6bac327-kube-api-access-52pkr\") pod \"ovn-controller-rwchz-config-88kmx\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " pod="openstack/ovn-controller-rwchz-config-88kmx" Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.199033 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/377059c1-2286-4127-b4cc-d19ef6bac327-var-run\") pod \"ovn-controller-rwchz-config-88kmx\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " pod="openstack/ovn-controller-rwchz-config-88kmx" Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.199174 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/377059c1-2286-4127-b4cc-d19ef6bac327-var-log-ovn\") pod \"ovn-controller-rwchz-config-88kmx\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " pod="openstack/ovn-controller-rwchz-config-88kmx" Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.199257 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/377059c1-2286-4127-b4cc-d19ef6bac327-additional-scripts\") pod \"ovn-controller-rwchz-config-88kmx\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " pod="openstack/ovn-controller-rwchz-config-88kmx" Jan 30 08:27:10 crc kubenswrapper[4870]: W0130 08:27:10.240350 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b66abfb_27d1_415e_abf2_2cb855a2bcaf.slice/crio-46c4c020377146f5485f00dabefc993bf1e423c4a1be085848906b1eeb21803f WatchSource:0}: Error finding container 46c4c020377146f5485f00dabefc993bf1e423c4a1be085848906b1eeb21803f: Status 404 returned error can't find the container with id 46c4c020377146f5485f00dabefc993bf1e423c4a1be085848906b1eeb21803f Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.243933 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b24b-account-create-update-d2n4p"] Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.301071 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/377059c1-2286-4127-b4cc-d19ef6bac327-var-run\") pod \"ovn-controller-rwchz-config-88kmx\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " pod="openstack/ovn-controller-rwchz-config-88kmx" Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.301139 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/377059c1-2286-4127-b4cc-d19ef6bac327-var-log-ovn\") pod \"ovn-controller-rwchz-config-88kmx\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " pod="openstack/ovn-controller-rwchz-config-88kmx" Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.301200 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/377059c1-2286-4127-b4cc-d19ef6bac327-additional-scripts\") pod \"ovn-controller-rwchz-config-88kmx\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " pod="openstack/ovn-controller-rwchz-config-88kmx" Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.301227 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/377059c1-2286-4127-b4cc-d19ef6bac327-var-run-ovn\") pod \"ovn-controller-rwchz-config-88kmx\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " pod="openstack/ovn-controller-rwchz-config-88kmx" Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.301253 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/377059c1-2286-4127-b4cc-d19ef6bac327-scripts\") pod \"ovn-controller-rwchz-config-88kmx\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " pod="openstack/ovn-controller-rwchz-config-88kmx" Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.301284 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52pkr\" (UniqueName: \"kubernetes.io/projected/377059c1-2286-4127-b4cc-d19ef6bac327-kube-api-access-52pkr\") pod \"ovn-controller-rwchz-config-88kmx\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " pod="openstack/ovn-controller-rwchz-config-88kmx" Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.301436 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/377059c1-2286-4127-b4cc-d19ef6bac327-var-run\") pod \"ovn-controller-rwchz-config-88kmx\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " pod="openstack/ovn-controller-rwchz-config-88kmx" Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.301535 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/377059c1-2286-4127-b4cc-d19ef6bac327-var-run-ovn\") pod \"ovn-controller-rwchz-config-88kmx\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " pod="openstack/ovn-controller-rwchz-config-88kmx" Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.301588 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/377059c1-2286-4127-b4cc-d19ef6bac327-var-log-ovn\") pod \"ovn-controller-rwchz-config-88kmx\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " pod="openstack/ovn-controller-rwchz-config-88kmx" Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.302359 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/377059c1-2286-4127-b4cc-d19ef6bac327-additional-scripts\") pod \"ovn-controller-rwchz-config-88kmx\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " pod="openstack/ovn-controller-rwchz-config-88kmx" Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.303549 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/377059c1-2286-4127-b4cc-d19ef6bac327-scripts\") pod \"ovn-controller-rwchz-config-88kmx\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " pod="openstack/ovn-controller-rwchz-config-88kmx" Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.321144 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52pkr\" (UniqueName: \"kubernetes.io/projected/377059c1-2286-4127-b4cc-d19ef6bac327-kube-api-access-52pkr\") pod \"ovn-controller-rwchz-config-88kmx\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " pod="openstack/ovn-controller-rwchz-config-88kmx" Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.431782 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rwchz-config-88kmx" Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.846629 4870 generic.go:334] "Generic (PLEG): container finished" podID="5ac3a52d-4734-4be8-9530-6b7b535664f8" containerID="ae557205b83ba573012321c0b15a5b47277e108dca93d5acd055965c34b03da8" exitCode=0 Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.846703 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zdg4s" event={"ID":"5ac3a52d-4734-4be8-9530-6b7b535664f8","Type":"ContainerDied","Data":"ae557205b83ba573012321c0b15a5b47277e108dca93d5acd055965c34b03da8"} Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.847957 4870 generic.go:334] "Generic (PLEG): container finished" podID="93cd49cf-8353-49eb-89d2-2d3630503d9f" containerID="6dec8f4d9911b49219f94545d1dff11226dd491baa26e53a02289cf2ce287699" exitCode=0 Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.848033 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-be9b-account-create-update-lgqm6" event={"ID":"93cd49cf-8353-49eb-89d2-2d3630503d9f","Type":"ContainerDied","Data":"6dec8f4d9911b49219f94545d1dff11226dd491baa26e53a02289cf2ce287699"} Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.848225 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-be9b-account-create-update-lgqm6" event={"ID":"93cd49cf-8353-49eb-89d2-2d3630503d9f","Type":"ContainerStarted","Data":"0c370191997744f514fd76f73204f4cad83d46ceaa3ed748d4231b4f9e72df11"} Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.849342 4870 generic.go:334] "Generic (PLEG): container finished" podID="3b66abfb-27d1-415e-abf2-2cb855a2bcaf" containerID="78530e29e6f33fe9e6244539f845bfc30d3752986bcfd2b607b62cc6f7d5aab3" exitCode=0 Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.849404 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b24b-account-create-update-d2n4p" event={"ID":"3b66abfb-27d1-415e-abf2-2cb855a2bcaf","Type":"ContainerDied","Data":"78530e29e6f33fe9e6244539f845bfc30d3752986bcfd2b607b62cc6f7d5aab3"} Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.849430 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b24b-account-create-update-d2n4p" event={"ID":"3b66abfb-27d1-415e-abf2-2cb855a2bcaf","Type":"ContainerStarted","Data":"46c4c020377146f5485f00dabefc993bf1e423c4a1be085848906b1eeb21803f"} Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.850561 4870 generic.go:334] "Generic (PLEG): container finished" podID="9e990d4f-b684-47e6-8056-08cf765aa33d" containerID="7e7325618d20bdeab54c732bc7a397cb58a9db4a697a599a002533f4811bf8bd" exitCode=0 Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.850590 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-772bw" event={"ID":"9e990d4f-b684-47e6-8056-08cf765aa33d","Type":"ContainerDied","Data":"7e7325618d20bdeab54c732bc7a397cb58a9db4a697a599a002533f4811bf8bd"} Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.850614 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-772bw" event={"ID":"9e990d4f-b684-47e6-8056-08cf765aa33d","Type":"ContainerStarted","Data":"714d1725f1b75a23ac0a271ba8ee0cf966f9c24054a5cb144dabe40f11e9f5c3"} Jan 30 08:27:10 crc kubenswrapper[4870]: I0130 08:27:10.958136 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rwchz-config-88kmx"] Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 
08:27:11.239977 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7k8wj" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.322031 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2wdt\" (UniqueName: \"kubernetes.io/projected/85bfb16d-8b6c-46e2-a7e3-0a5051aa66df-kube-api-access-m2wdt\") pod \"85bfb16d-8b6c-46e2-a7e3-0a5051aa66df\" (UID: \"85bfb16d-8b6c-46e2-a7e3-0a5051aa66df\") " Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.322235 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85bfb16d-8b6c-46e2-a7e3-0a5051aa66df-operator-scripts\") pod \"85bfb16d-8b6c-46e2-a7e3-0a5051aa66df\" (UID: \"85bfb16d-8b6c-46e2-a7e3-0a5051aa66df\") " Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.323016 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85bfb16d-8b6c-46e2-a7e3-0a5051aa66df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85bfb16d-8b6c-46e2-a7e3-0a5051aa66df" (UID: "85bfb16d-8b6c-46e2-a7e3-0a5051aa66df"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.329910 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85bfb16d-8b6c-46e2-a7e3-0a5051aa66df-kube-api-access-m2wdt" (OuterVolumeSpecName: "kube-api-access-m2wdt") pod "85bfb16d-8b6c-46e2-a7e3-0a5051aa66df" (UID: "85bfb16d-8b6c-46e2-a7e3-0a5051aa66df"). InnerVolumeSpecName "kube-api-access-m2wdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.337202 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-x6s7d"] Jan 30 08:27:11 crc kubenswrapper[4870]: E0130 08:27:11.338090 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85bfb16d-8b6c-46e2-a7e3-0a5051aa66df" containerName="mariadb-account-create-update" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.338128 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="85bfb16d-8b6c-46e2-a7e3-0a5051aa66df" containerName="mariadb-account-create-update" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.338379 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="85bfb16d-8b6c-46e2-a7e3-0a5051aa66df" containerName="mariadb-account-create-update" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.341247 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-x6s7d" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.347920 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-x6s7d"] Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.424515 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3-operator-scripts\") pod \"watcher-db-create-x6s7d\" (UID: \"40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3\") " pod="openstack/watcher-db-create-x6s7d" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.424579 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jrcr\" (UniqueName: \"kubernetes.io/projected/40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3-kube-api-access-2jrcr\") pod \"watcher-db-create-x6s7d\" (UID: \"40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3\") " pod="openstack/watcher-db-create-x6s7d" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.424779 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85bfb16d-8b6c-46e2-a7e3-0a5051aa66df-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.424794 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2wdt\" (UniqueName: \"kubernetes.io/projected/85bfb16d-8b6c-46e2-a7e3-0a5051aa66df-kube-api-access-m2wdt\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.425649 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-a8a4-account-create-update-8gm2f"] Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.426639 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-a8a4-account-create-update-8gm2f" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.431599 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.442460 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-a8a4-account-create-update-8gm2f"] Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.526289 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585a2047-d3db-4822-89b3-52fcd65d6e09-operator-scripts\") pod \"watcher-a8a4-account-create-update-8gm2f\" (UID: \"585a2047-d3db-4822-89b3-52fcd65d6e09\") " pod="openstack/watcher-a8a4-account-create-update-8gm2f" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.526378 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3-operator-scripts\") pod \"watcher-db-create-x6s7d\" (UID: \"40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3\") " pod="openstack/watcher-db-create-x6s7d" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.526573 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zxg9\" (UniqueName: \"kubernetes.io/projected/585a2047-d3db-4822-89b3-52fcd65d6e09-kube-api-access-8zxg9\") pod \"watcher-a8a4-account-create-update-8gm2f\" (UID: \"585a2047-d3db-4822-89b3-52fcd65d6e09\") " pod="openstack/watcher-a8a4-account-create-update-8gm2f" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.526624 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jrcr\" (UniqueName: \"kubernetes.io/projected/40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3-kube-api-access-2jrcr\") pod \"watcher-db-create-x6s7d\" (UID: \"40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3\") " pod="openstack/watcher-db-create-x6s7d" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.527145 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3-operator-scripts\") pod \"watcher-db-create-x6s7d\" (UID: \"40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3\") " pod="openstack/watcher-db-create-x6s7d" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.545480 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jrcr\" (UniqueName: \"kubernetes.io/projected/40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3-kube-api-access-2jrcr\") pod \"watcher-db-create-x6s7d\" (UID: \"40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3\") " pod="openstack/watcher-db-create-x6s7d" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.628893 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585a2047-d3db-4822-89b3-52fcd65d6e09-operator-scripts\") pod \"watcher-a8a4-account-create-update-8gm2f\" (UID: \"585a2047-d3db-4822-89b3-52fcd65d6e09\") " pod="openstack/watcher-a8a4-account-create-update-8gm2f" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.628991 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zxg9\" (UniqueName: \"kubernetes.io/projected/585a2047-d3db-4822-89b3-52fcd65d6e09-kube-api-access-8zxg9\") pod 
\"watcher-a8a4-account-create-update-8gm2f\" (UID: \"585a2047-d3db-4822-89b3-52fcd65d6e09\") " pod="openstack/watcher-a8a4-account-create-update-8gm2f" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.629797 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585a2047-d3db-4822-89b3-52fcd65d6e09-operator-scripts\") pod \"watcher-a8a4-account-create-update-8gm2f\" (UID: \"585a2047-d3db-4822-89b3-52fcd65d6e09\") " pod="openstack/watcher-a8a4-account-create-update-8gm2f" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.651607 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zxg9\" (UniqueName: \"kubernetes.io/projected/585a2047-d3db-4822-89b3-52fcd65d6e09-kube-api-access-8zxg9\") pod \"watcher-a8a4-account-create-update-8gm2f\" (UID: \"585a2047-d3db-4822-89b3-52fcd65d6e09\") " pod="openstack/watcher-a8a4-account-create-update-8gm2f" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.657715 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-x6s7d" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.744407 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-a8a4-account-create-update-8gm2f" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.868338 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7k8wj" event={"ID":"85bfb16d-8b6c-46e2-a7e3-0a5051aa66df","Type":"ContainerDied","Data":"efac2cd7376b7e1b5a56aa00876b64c5d5188202a07cca8938cbd3daf70f1cc0"} Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.868378 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efac2cd7376b7e1b5a56aa00876b64c5d5188202a07cca8938cbd3daf70f1cc0" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.868402 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7k8wj" Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.876667 4870 generic.go:334] "Generic (PLEG): container finished" podID="377059c1-2286-4127-b4cc-d19ef6bac327" containerID="cd0dfeab70fb307cbb6535bdd2b5daa2556dc7c49a1bf88e90112f1cde7b135d" exitCode=0 Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.877477 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rwchz-config-88kmx" event={"ID":"377059c1-2286-4127-b4cc-d19ef6bac327","Type":"ContainerDied","Data":"cd0dfeab70fb307cbb6535bdd2b5daa2556dc7c49a1bf88e90112f1cde7b135d"} Jan 30 08:27:11 crc kubenswrapper[4870]: I0130 08:27:11.877500 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rwchz-config-88kmx" event={"ID":"377059c1-2286-4127-b4cc-d19ef6bac327","Type":"ContainerStarted","Data":"aa51696b6012176e76d2c25bdaddadb683836a8392f3031a0bab8fd34cf74ff2"} Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.104480 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-x6s7d"] Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.352783 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-be9b-account-create-update-lgqm6" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.362887 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-b24b-account-create-update-d2n4p" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.437304 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-772bw" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.452130 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93cd49cf-8353-49eb-89d2-2d3630503d9f-operator-scripts\") pod \"93cd49cf-8353-49eb-89d2-2d3630503d9f\" (UID: \"93cd49cf-8353-49eb-89d2-2d3630503d9f\") " Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.452968 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93cd49cf-8353-49eb-89d2-2d3630503d9f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93cd49cf-8353-49eb-89d2-2d3630503d9f" (UID: "93cd49cf-8353-49eb-89d2-2d3630503d9f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.453152 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5kjs\" (UniqueName: \"kubernetes.io/projected/93cd49cf-8353-49eb-89d2-2d3630503d9f-kube-api-access-x5kjs\") pod \"93cd49cf-8353-49eb-89d2-2d3630503d9f\" (UID: \"93cd49cf-8353-49eb-89d2-2d3630503d9f\") " Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.454263 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93cd49cf-8353-49eb-89d2-2d3630503d9f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.455749 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zdg4s" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.457992 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93cd49cf-8353-49eb-89d2-2d3630503d9f-kube-api-access-x5kjs" (OuterVolumeSpecName: "kube-api-access-x5kjs") pod "93cd49cf-8353-49eb-89d2-2d3630503d9f" (UID: "93cd49cf-8353-49eb-89d2-2d3630503d9f"). InnerVolumeSpecName "kube-api-access-x5kjs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.527950 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-a8a4-account-create-update-8gm2f"] Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.555338 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ac3a52d-4734-4be8-9530-6b7b535664f8-operator-scripts\") pod \"5ac3a52d-4734-4be8-9530-6b7b535664f8\" (UID: \"5ac3a52d-4734-4be8-9530-6b7b535664f8\") " Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.555388 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98chn\" (UniqueName: \"kubernetes.io/projected/3b66abfb-27d1-415e-abf2-2cb855a2bcaf-kube-api-access-98chn\") pod \"3b66abfb-27d1-415e-abf2-2cb855a2bcaf\" (UID: \"3b66abfb-27d1-415e-abf2-2cb855a2bcaf\") " Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.555477 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e990d4f-b684-47e6-8056-08cf765aa33d-operator-scripts\") pod \"9e990d4f-b684-47e6-8056-08cf765aa33d\" (UID: \"9e990d4f-b684-47e6-8056-08cf765aa33d\") " Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.555517 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r68t2\" (UniqueName: \"kubernetes.io/projected/5ac3a52d-4734-4be8-9530-6b7b535664f8-kube-api-access-r68t2\") pod \"5ac3a52d-4734-4be8-9530-6b7b535664f8\" (UID: \"5ac3a52d-4734-4be8-9530-6b7b535664f8\") " Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.555579 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4cc7\" (UniqueName: \"kubernetes.io/projected/9e990d4f-b684-47e6-8056-08cf765aa33d-kube-api-access-m4cc7\") pod \"9e990d4f-b684-47e6-8056-08cf765aa33d\" (UID: \"9e990d4f-b684-47e6-8056-08cf765aa33d\") " Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.555642 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b66abfb-27d1-415e-abf2-2cb855a2bcaf-operator-scripts\") pod \"3b66abfb-27d1-415e-abf2-2cb855a2bcaf\" (UID: \"3b66abfb-27d1-415e-abf2-2cb855a2bcaf\") " Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.555848 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ac3a52d-4734-4be8-9530-6b7b535664f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ac3a52d-4734-4be8-9530-6b7b535664f8" (UID: "5ac3a52d-4734-4be8-9530-6b7b535664f8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.555931 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e990d4f-b684-47e6-8056-08cf765aa33d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e990d4f-b684-47e6-8056-08cf765aa33d" (UID: "9e990d4f-b684-47e6-8056-08cf765aa33d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.555971 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5kjs\" (UniqueName: \"kubernetes.io/projected/93cd49cf-8353-49eb-89d2-2d3630503d9f-kube-api-access-x5kjs\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.556041 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ac3a52d-4734-4be8-9530-6b7b535664f8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.556324 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b66abfb-27d1-415e-abf2-2cb855a2bcaf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b66abfb-27d1-415e-abf2-2cb855a2bcaf" (UID: "3b66abfb-27d1-415e-abf2-2cb855a2bcaf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.559417 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b66abfb-27d1-415e-abf2-2cb855a2bcaf-kube-api-access-98chn" (OuterVolumeSpecName: "kube-api-access-98chn") pod "3b66abfb-27d1-415e-abf2-2cb855a2bcaf" (UID: "3b66abfb-27d1-415e-abf2-2cb855a2bcaf"). InnerVolumeSpecName "kube-api-access-98chn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.559551 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e990d4f-b684-47e6-8056-08cf765aa33d-kube-api-access-m4cc7" (OuterVolumeSpecName: "kube-api-access-m4cc7") pod "9e990d4f-b684-47e6-8056-08cf765aa33d" (UID: "9e990d4f-b684-47e6-8056-08cf765aa33d"). InnerVolumeSpecName "kube-api-access-m4cc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.559692 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ac3a52d-4734-4be8-9530-6b7b535664f8-kube-api-access-r68t2" (OuterVolumeSpecName: "kube-api-access-r68t2") pod "5ac3a52d-4734-4be8-9530-6b7b535664f8" (UID: "5ac3a52d-4734-4be8-9530-6b7b535664f8"). InnerVolumeSpecName "kube-api-access-r68t2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.613604 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.617039 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.657341 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4cc7\" (UniqueName: \"kubernetes.io/projected/9e990d4f-b684-47e6-8056-08cf765aa33d-kube-api-access-m4cc7\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.657377 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b66abfb-27d1-415e-abf2-2cb855a2bcaf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.657392 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98chn\" (UniqueName: \"kubernetes.io/projected/3b66abfb-27d1-415e-abf2-2cb855a2bcaf-kube-api-access-98chn\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.657403 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e990d4f-b684-47e6-8056-08cf765aa33d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.657416 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r68t2\" (UniqueName: \"kubernetes.io/projected/5ac3a52d-4734-4be8-9530-6b7b535664f8-kube-api-access-r68t2\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.888953 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-b24b-account-create-update-d2n4p" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.888948 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b24b-account-create-update-d2n4p" event={"ID":"3b66abfb-27d1-415e-abf2-2cb855a2bcaf","Type":"ContainerDied","Data":"46c4c020377146f5485f00dabefc993bf1e423c4a1be085848906b1eeb21803f"} Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.889182 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46c4c020377146f5485f00dabefc993bf1e423c4a1be085848906b1eeb21803f" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.890677 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-a8a4-account-create-update-8gm2f" event={"ID":"585a2047-d3db-4822-89b3-52fcd65d6e09","Type":"ContainerStarted","Data":"e32e8f9eb095a0767af1259a467ea84160f17bae2cb726e02486629d03a26d33"} Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.890713 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-a8a4-account-create-update-8gm2f" event={"ID":"585a2047-d3db-4822-89b3-52fcd65d6e09","Type":"ContainerStarted","Data":"0045c9e1377a9668a3ada55443745aaffe10a93f434201f632113f32311c59ee"} Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.893310 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zdg4s" event={"ID":"5ac3a52d-4734-4be8-9530-6b7b535664f8","Type":"ContainerDied","Data":"9d8f67b6ce532dd69259548889e2d099125d75a9ac7b7e4d54afd39958c000d2"} Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.893334 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d8f67b6ce532dd69259548889e2d099125d75a9ac7b7e4d54afd39958c000d2" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.893377 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zdg4s" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.906396 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-be9b-account-create-update-lgqm6" event={"ID":"93cd49cf-8353-49eb-89d2-2d3630503d9f","Type":"ContainerDied","Data":"0c370191997744f514fd76f73204f4cad83d46ceaa3ed748d4231b4f9e72df11"} Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.906435 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c370191997744f514fd76f73204f4cad83d46ceaa3ed748d4231b4f9e72df11" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.906495 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-be9b-account-create-update-lgqm6" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.910780 4870 generic.go:334] "Generic (PLEG): container finished" podID="40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3" containerID="5726ace895a9d7102cc621cf411a4327a47995798d8abdba29b293b762399c80" exitCode=0 Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.910897 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-x6s7d" event={"ID":"40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3","Type":"ContainerDied","Data":"5726ace895a9d7102cc621cf411a4327a47995798d8abdba29b293b762399c80"} Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.910953 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-x6s7d" event={"ID":"40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3","Type":"ContainerStarted","Data":"1f97d6a64f70d41e661a86d41b4c29baca98bfc6e559fe436d16cd50b549a65d"} Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.914076 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-772bw" event={"ID":"9e990d4f-b684-47e6-8056-08cf765aa33d","Type":"ContainerDied","Data":"714d1725f1b75a23ac0a271ba8ee0cf966f9c24054a5cb144dabe40f11e9f5c3"} Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.914131 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="714d1725f1b75a23ac0a271ba8ee0cf966f9c24054a5cb144dabe40f11e9f5c3" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.914206 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-772bw" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.917126 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:12 crc kubenswrapper[4870]: I0130 08:27:12.938267 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-a8a4-account-create-update-8gm2f" podStartSLOduration=1.9382475160000001 podStartE2EDuration="1.938247516s" podCreationTimestamp="2026-01-30 08:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:27:12.934705841 +0000 UTC m=+1071.630252950" watchObservedRunningTime="2026-01-30 08:27:12.938247516 +0000 UTC m=+1071.633794625" Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.351932 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rwchz-config-88kmx" Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.477559 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52pkr\" (UniqueName: \"kubernetes.io/projected/377059c1-2286-4127-b4cc-d19ef6bac327-kube-api-access-52pkr\") pod \"377059c1-2286-4127-b4cc-d19ef6bac327\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.477753 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/377059c1-2286-4127-b4cc-d19ef6bac327-scripts\") pod \"377059c1-2286-4127-b4cc-d19ef6bac327\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.477805 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/377059c1-2286-4127-b4cc-d19ef6bac327-var-log-ovn\") pod \"377059c1-2286-4127-b4cc-d19ef6bac327\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.477827 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/377059c1-2286-4127-b4cc-d19ef6bac327-additional-scripts\") pod \"377059c1-2286-4127-b4cc-d19ef6bac327\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.477942 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/377059c1-2286-4127-b4cc-d19ef6bac327-var-run-ovn\") pod \"377059c1-2286-4127-b4cc-d19ef6bac327\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.477989 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/377059c1-2286-4127-b4cc-d19ef6bac327-var-run\") pod \"377059c1-2286-4127-b4cc-d19ef6bac327\" (UID: \"377059c1-2286-4127-b4cc-d19ef6bac327\") " Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.482089 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/377059c1-2286-4127-b4cc-d19ef6bac327-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "377059c1-2286-4127-b4cc-d19ef6bac327" (UID: "377059c1-2286-4127-b4cc-d19ef6bac327"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.482121 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/377059c1-2286-4127-b4cc-d19ef6bac327-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "377059c1-2286-4127-b4cc-d19ef6bac327" (UID: "377059c1-2286-4127-b4cc-d19ef6bac327"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.482842 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/377059c1-2286-4127-b4cc-d19ef6bac327-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "377059c1-2286-4127-b4cc-d19ef6bac327" (UID: "377059c1-2286-4127-b4cc-d19ef6bac327"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.483436 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/377059c1-2286-4127-b4cc-d19ef6bac327-scripts" (OuterVolumeSpecName: "scripts") pod "377059c1-2286-4127-b4cc-d19ef6bac327" (UID: "377059c1-2286-4127-b4cc-d19ef6bac327"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.486113 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/377059c1-2286-4127-b4cc-d19ef6bac327-var-run" (OuterVolumeSpecName: "var-run") pod "377059c1-2286-4127-b4cc-d19ef6bac327" (UID: "377059c1-2286-4127-b4cc-d19ef6bac327"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.508838 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/377059c1-2286-4127-b4cc-d19ef6bac327-kube-api-access-52pkr" (OuterVolumeSpecName: "kube-api-access-52pkr") pod "377059c1-2286-4127-b4cc-d19ef6bac327" (UID: "377059c1-2286-4127-b4cc-d19ef6bac327"). InnerVolumeSpecName "kube-api-access-52pkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.581113 4870 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/377059c1-2286-4127-b4cc-d19ef6bac327-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.581143 4870 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/377059c1-2286-4127-b4cc-d19ef6bac327-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.581153 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52pkr\" (UniqueName: \"kubernetes.io/projected/377059c1-2286-4127-b4cc-d19ef6bac327-kube-api-access-52pkr\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.581166 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/377059c1-2286-4127-b4cc-d19ef6bac327-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.581174 4870 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/377059c1-2286-4127-b4cc-d19ef6bac327-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.581183 4870 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/377059c1-2286-4127-b4cc-d19ef6bac327-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.923048 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rwchz-config-88kmx" event={"ID":"377059c1-2286-4127-b4cc-d19ef6bac327","Type":"ContainerDied","Data":"aa51696b6012176e76d2c25bdaddadb683836a8392f3031a0bab8fd34cf74ff2"} Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.923958 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa51696b6012176e76d2c25bdaddadb683836a8392f3031a0bab8fd34cf74ff2" Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.923154 4870 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rwchz-config-88kmx" Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.924400 4870 generic.go:334] "Generic (PLEG): container finished" podID="585a2047-d3db-4822-89b3-52fcd65d6e09" containerID="e32e8f9eb095a0767af1259a467ea84160f17bae2cb726e02486629d03a26d33" exitCode=0 Jan 30 08:27:13 crc kubenswrapper[4870]: I0130 08:27:13.924865 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-a8a4-account-create-update-8gm2f" event={"ID":"585a2047-d3db-4822-89b3-52fcd65d6e09","Type":"ContainerDied","Data":"e32e8f9eb095a0767af1259a467ea84160f17bae2cb726e02486629d03a26d33"} Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.297251 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-x6s7d" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.396126 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3-operator-scripts\") pod \"40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3\" (UID: \"40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3\") " Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.396335 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jrcr\" (UniqueName: \"kubernetes.io/projected/40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3-kube-api-access-2jrcr\") pod \"40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3\" (UID: \"40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3\") " Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.396790 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-etc-swift\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.402039 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3-kube-api-access-2jrcr" (OuterVolumeSpecName: "kube-api-access-2jrcr") pod "40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3" (UID: "40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3"). InnerVolumeSpecName "kube-api-access-2jrcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.402488 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3" (UID: "40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.415206 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/46634e41-7d5b-4181-b824-716bb37fca47-etc-swift\") pod \"swift-storage-0\" (UID: \"46634e41-7d5b-4181-b824-716bb37fca47\") " pod="openstack/swift-storage-0" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.478260 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-rwchz-config-88kmx"] Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.484734 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-rwchz-config-88kmx"] Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.498174 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.498216 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jrcr\" (UniqueName: \"kubernetes.io/projected/40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3-kube-api-access-2jrcr\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.532025 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.579540 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rwchz-config-9v2n7"] Jan 30 08:27:14 crc kubenswrapper[4870]: E0130 08:27:14.580140 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="377059c1-2286-4127-b4cc-d19ef6bac327" containerName="ovn-config" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.580239 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="377059c1-2286-4127-b4cc-d19ef6bac327" containerName="ovn-config" Jan 30 08:27:14 crc kubenswrapper[4870]: E0130 08:27:14.580256 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93cd49cf-8353-49eb-89d2-2d3630503d9f" containerName="mariadb-account-create-update" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.580262 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="93cd49cf-8353-49eb-89d2-2d3630503d9f" containerName="mariadb-account-create-update" Jan 30 08:27:14 crc kubenswrapper[4870]: E0130 08:27:14.580273 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e990d4f-b684-47e6-8056-08cf765aa33d" containerName="mariadb-database-create" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.580280 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e990d4f-b684-47e6-8056-08cf765aa33d" containerName="mariadb-database-create" Jan 30 08:27:14 crc kubenswrapper[4870]: E0130 08:27:14.580322 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b66abfb-27d1-415e-abf2-2cb855a2bcaf" containerName="mariadb-account-create-update" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.580329 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b66abfb-27d1-415e-abf2-2cb855a2bcaf" containerName="mariadb-account-create-update" Jan 30 08:27:14 crc kubenswrapper[4870]: E0130 08:27:14.580343 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3" containerName="mariadb-database-create" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 
08:27:14.580350 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3" containerName="mariadb-database-create" Jan 30 08:27:14 crc kubenswrapper[4870]: E0130 08:27:14.580360 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ac3a52d-4734-4be8-9530-6b7b535664f8" containerName="mariadb-database-create" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.580366 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ac3a52d-4734-4be8-9530-6b7b535664f8" containerName="mariadb-database-create" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.580572 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="93cd49cf-8353-49eb-89d2-2d3630503d9f" containerName="mariadb-account-create-update" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.580586 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b66abfb-27d1-415e-abf2-2cb855a2bcaf" containerName="mariadb-account-create-update" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.580599 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3" containerName="mariadb-database-create" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.580611 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e990d4f-b684-47e6-8056-08cf765aa33d" containerName="mariadb-database-create" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.580622 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ac3a52d-4734-4be8-9530-6b7b535664f8" containerName="mariadb-database-create" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.580664 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="377059c1-2286-4127-b4cc-d19ef6bac327" containerName="ovn-config" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.581444 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rwchz-config-9v2n7" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.584752 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.596665 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rwchz-config-9v2n7"] Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.701268 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-var-run-ovn\") pod \"ovn-controller-rwchz-config-9v2n7\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " pod="openstack/ovn-controller-rwchz-config-9v2n7" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.701651 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-additional-scripts\") pod \"ovn-controller-rwchz-config-9v2n7\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " pod="openstack/ovn-controller-rwchz-config-9v2n7" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.701684 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-var-run\") pod \"ovn-controller-rwchz-config-9v2n7\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " pod="openstack/ovn-controller-rwchz-config-9v2n7" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.701722 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-scripts\") pod \"ovn-controller-rwchz-config-9v2n7\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " pod="openstack/ovn-controller-rwchz-config-9v2n7" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.701769 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-var-log-ovn\") pod \"ovn-controller-rwchz-config-9v2n7\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " pod="openstack/ovn-controller-rwchz-config-9v2n7" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.701815 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4c72\" (UniqueName: \"kubernetes.io/projected/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-kube-api-access-v4c72\") pod \"ovn-controller-rwchz-config-9v2n7\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " pod="openstack/ovn-controller-rwchz-config-9v2n7" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.802746 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-var-run\") pod \"ovn-controller-rwchz-config-9v2n7\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " pod="openstack/ovn-controller-rwchz-config-9v2n7" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.802851 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-scripts\") pod 
\"ovn-controller-rwchz-config-9v2n7\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " pod="openstack/ovn-controller-rwchz-config-9v2n7" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.803132 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-var-log-ovn\") pod \"ovn-controller-rwchz-config-9v2n7\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " pod="openstack/ovn-controller-rwchz-config-9v2n7" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.803188 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4c72\" (UniqueName: \"kubernetes.io/projected/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-kube-api-access-v4c72\") pod \"ovn-controller-rwchz-config-9v2n7\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " pod="openstack/ovn-controller-rwchz-config-9v2n7" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.803256 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-var-run-ovn\") pod \"ovn-controller-rwchz-config-9v2n7\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " pod="openstack/ovn-controller-rwchz-config-9v2n7" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.803289 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-additional-scripts\") pod \"ovn-controller-rwchz-config-9v2n7\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " pod="openstack/ovn-controller-rwchz-config-9v2n7" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.803422 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-var-run\") pod \"ovn-controller-rwchz-config-9v2n7\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " pod="openstack/ovn-controller-rwchz-config-9v2n7" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.803639 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-var-log-ovn\") pod \"ovn-controller-rwchz-config-9v2n7\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " pod="openstack/ovn-controller-rwchz-config-9v2n7" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.803725 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-var-run-ovn\") pod \"ovn-controller-rwchz-config-9v2n7\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " pod="openstack/ovn-controller-rwchz-config-9v2n7" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.804213 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-additional-scripts\") pod \"ovn-controller-rwchz-config-9v2n7\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " pod="openstack/ovn-controller-rwchz-config-9v2n7" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.804850 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-scripts\") pod 
\"ovn-controller-rwchz-config-9v2n7\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " pod="openstack/ovn-controller-rwchz-config-9v2n7" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.824081 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4c72\" (UniqueName: \"kubernetes.io/projected/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-kube-api-access-v4c72\") pod \"ovn-controller-rwchz-config-9v2n7\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " pod="openstack/ovn-controller-rwchz-config-9v2n7" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.824124 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-rwchz" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.901760 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rwchz-config-9v2n7" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.932809 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-x6s7d" Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.933071 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-x6s7d" event={"ID":"40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3","Type":"ContainerDied","Data":"1f97d6a64f70d41e661a86d41b4c29baca98bfc6e559fe436d16cd50b549a65d"} Jan 30 08:27:14 crc kubenswrapper[4870]: I0130 08:27:14.933115 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f97d6a64f70d41e661a86d41b4c29baca98bfc6e559fe436d16cd50b549a65d" Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.155779 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.213793 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-a8a4-account-create-update-8gm2f" Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.312614 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585a2047-d3db-4822-89b3-52fcd65d6e09-operator-scripts\") pod \"585a2047-d3db-4822-89b3-52fcd65d6e09\" (UID: \"585a2047-d3db-4822-89b3-52fcd65d6e09\") " Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.312674 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zxg9\" (UniqueName: \"kubernetes.io/projected/585a2047-d3db-4822-89b3-52fcd65d6e09-kube-api-access-8zxg9\") pod \"585a2047-d3db-4822-89b3-52fcd65d6e09\" (UID: \"585a2047-d3db-4822-89b3-52fcd65d6e09\") " Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.313504 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/585a2047-d3db-4822-89b3-52fcd65d6e09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "585a2047-d3db-4822-89b3-52fcd65d6e09" (UID: "585a2047-d3db-4822-89b3-52fcd65d6e09"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.319060 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/585a2047-d3db-4822-89b3-52fcd65d6e09-kube-api-access-8zxg9" (OuterVolumeSpecName: "kube-api-access-8zxg9") pod "585a2047-d3db-4822-89b3-52fcd65d6e09" (UID: "585a2047-d3db-4822-89b3-52fcd65d6e09"). InnerVolumeSpecName "kube-api-access-8zxg9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.414841 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585a2047-d3db-4822-89b3-52fcd65d6e09-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.414907 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zxg9\" (UniqueName: \"kubernetes.io/projected/585a2047-d3db-4822-89b3-52fcd65d6e09-kube-api-access-8zxg9\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.431292 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rwchz-config-9v2n7"] Jan 30 08:27:15 crc kubenswrapper[4870]: W0130 08:27:15.585475 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c02c670_5f7d_4ee6_9072_e6e1ba2d6c61.slice/crio-158eec22a72dfc9360224d25636792800780cc5b6659cf69de830857fbfed11d WatchSource:0}: Error finding container 158eec22a72dfc9360224d25636792800780cc5b6659cf69de830857fbfed11d: Status 404 returned error can't find the container with id 158eec22a72dfc9360224d25636792800780cc5b6659cf69de830857fbfed11d Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.653472 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.653706 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" containerName="prometheus" containerID="cri-o://4c9a2cc96afb4697dcd9efa47cf237f34c2e2a0fb97e86a04bed4e71098a047b" gracePeriod=600 Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.653800 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" containerName="thanos-sidecar" containerID="cri-o://49386d341f6cc7754611b6a6c194cd8140385a7b91dc049f19bb791c005d6a86" gracePeriod=600 Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.653818 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" containerName="config-reloader" containerID="cri-o://0e6b5e7cf5bb76c5d4597f6eef6bb065c51cbad9cc2aae78711fd5e59b7109c8" gracePeriod=600 Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.960299 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-a8a4-account-create-update-8gm2f" event={"ID":"585a2047-d3db-4822-89b3-52fcd65d6e09","Type":"ContainerDied","Data":"0045c9e1377a9668a3ada55443745aaffe10a93f434201f632113f32311c59ee"} Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.960648 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0045c9e1377a9668a3ada55443745aaffe10a93f434201f632113f32311c59ee" Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.960394 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-a8a4-account-create-update-8gm2f" Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.963233 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"46634e41-7d5b-4181-b824-716bb37fca47","Type":"ContainerStarted","Data":"8f47ea6bf4d5583a27aaad61d251bd16077c3d6a9c987da87476d52e32cafb52"} Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.963279 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"46634e41-7d5b-4181-b824-716bb37fca47","Type":"ContainerStarted","Data":"21d3f193095c3bf008b3c6fbc5e88d15f5178aea214282e57db0b0c4bce0d86b"} Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.964898 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rwchz-config-9v2n7" event={"ID":"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61","Type":"ContainerStarted","Data":"d52e878ce9dd90e8dba444ebd6a2071ac79735b92b1f1220889d88eefcb18bc4"} Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.964928 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rwchz-config-9v2n7" event={"ID":"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61","Type":"ContainerStarted","Data":"158eec22a72dfc9360224d25636792800780cc5b6659cf69de830857fbfed11d"} Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.970371 4870 generic.go:334] "Generic (PLEG): container finished" podID="0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" containerID="49386d341f6cc7754611b6a6c194cd8140385a7b91dc049f19bb791c005d6a86" exitCode=0 Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.970398 4870 generic.go:334] "Generic (PLEG): container finished" podID="0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" containerID="0e6b5e7cf5bb76c5d4597f6eef6bb065c51cbad9cc2aae78711fd5e59b7109c8" exitCode=0 Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.970407 4870 generic.go:334] "Generic (PLEG): container finished" podID="0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" containerID="4c9a2cc96afb4697dcd9efa47cf237f34c2e2a0fb97e86a04bed4e71098a047b" exitCode=0 Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.970428 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7","Type":"ContainerDied","Data":"49386d341f6cc7754611b6a6c194cd8140385a7b91dc049f19bb791c005d6a86"} Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.970453 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7","Type":"ContainerDied","Data":"0e6b5e7cf5bb76c5d4597f6eef6bb065c51cbad9cc2aae78711fd5e59b7109c8"} Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.970462 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7","Type":"ContainerDied","Data":"4c9a2cc96afb4697dcd9efa47cf237f34c2e2a0fb97e86a04bed4e71098a047b"} Jan 30 08:27:15 crc kubenswrapper[4870]: I0130 08:27:15.983327 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-rwchz-config-9v2n7" podStartSLOduration=1.983310267 podStartE2EDuration="1.983310267s" podCreationTimestamp="2026-01-30 08:27:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:27:15.979814603 +0000 UTC m=+1074.675361712" watchObservedRunningTime="2026-01-30 
08:27:15.983310267 +0000 UTC m=+1074.678857376" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.083358 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="377059c1-2286-4127-b4cc-d19ef6bac327" path="/var/lib/kubelet/pods/377059c1-2286-4127-b4cc-d19ef6bac327/volumes" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.277745 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-7k8wj"] Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.282825 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-7k8wj"] Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.364054 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-cpgc6"] Jan 30 08:27:16 crc kubenswrapper[4870]: E0130 08:27:16.364373 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585a2047-d3db-4822-89b3-52fcd65d6e09" containerName="mariadb-account-create-update" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.364384 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="585a2047-d3db-4822-89b3-52fcd65d6e09" containerName="mariadb-account-create-update" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.364549 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="585a2047-d3db-4822-89b3-52fcd65d6e09" containerName="mariadb-account-create-update" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.365059 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cpgc6" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.375195 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.379367 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cpgc6"] Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.547699 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bc3ddf0-5fc8-4425-a434-1452753e1297-operator-scripts\") pod \"root-account-create-update-cpgc6\" (UID: \"8bc3ddf0-5fc8-4425-a434-1452753e1297\") " pod="openstack/root-account-create-update-cpgc6" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.547900 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9t6n\" (UniqueName: \"kubernetes.io/projected/8bc3ddf0-5fc8-4425-a434-1452753e1297-kube-api-access-f9t6n\") pod \"root-account-create-update-cpgc6\" (UID: \"8bc3ddf0-5fc8-4425-a434-1452753e1297\") " pod="openstack/root-account-create-update-cpgc6" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.650618 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bc3ddf0-5fc8-4425-a434-1452753e1297-operator-scripts\") pod \"root-account-create-update-cpgc6\" (UID: \"8bc3ddf0-5fc8-4425-a434-1452753e1297\") " pod="openstack/root-account-create-update-cpgc6" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.650966 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9t6n\" (UniqueName: \"kubernetes.io/projected/8bc3ddf0-5fc8-4425-a434-1452753e1297-kube-api-access-f9t6n\") pod \"root-account-create-update-cpgc6\" (UID: 
\"8bc3ddf0-5fc8-4425-a434-1452753e1297\") " pod="openstack/root-account-create-update-cpgc6" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.651612 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bc3ddf0-5fc8-4425-a434-1452753e1297-operator-scripts\") pod \"root-account-create-update-cpgc6\" (UID: \"8bc3ddf0-5fc8-4425-a434-1452753e1297\") " pod="openstack/root-account-create-update-cpgc6" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.679739 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9t6n\" (UniqueName: \"kubernetes.io/projected/8bc3ddf0-5fc8-4425-a434-1452753e1297-kube-api-access-f9t6n\") pod \"root-account-create-update-cpgc6\" (UID: \"8bc3ddf0-5fc8-4425-a434-1452753e1297\") " pod="openstack/root-account-create-update-cpgc6" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.698405 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cpgc6" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.758340 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.954553 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-tls-assets\") pod \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.954693 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-prometheus-metric-storage-rulefiles-0\") pod \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.954738 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z5p8\" (UniqueName: \"kubernetes.io/projected/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-kube-api-access-9z5p8\") pod \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.954757 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-web-config\") pod \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.954790 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-prometheus-metric-storage-rulefiles-1\") pod \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.954836 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-prometheus-metric-storage-rulefiles-2\") pod \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " Jan 30 08:27:16 crc 
kubenswrapper[4870]: I0130 08:27:16.954950 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\") pod \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.954984 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-config\") pod \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.955005 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-config-out\") pod \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.955037 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-thanos-prometheus-http-client-file\") pod \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\" (UID: \"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7\") " Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.956751 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" (UID: "0410b897-4bd8-48aa-a9fd-8213f6d9dbd7"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.957103 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" (UID: "0410b897-4bd8-48aa-a9fd-8213f6d9dbd7"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.957402 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" (UID: "0410b897-4bd8-48aa-a9fd-8213f6d9dbd7"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.961000 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-config-out" (OuterVolumeSpecName: "config-out") pod "0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" (UID: "0410b897-4bd8-48aa-a9fd-8213f6d9dbd7"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.961609 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-config" (OuterVolumeSpecName: "config") pod "0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" (UID: "0410b897-4bd8-48aa-a9fd-8213f6d9dbd7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.961697 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-kube-api-access-9z5p8" (OuterVolumeSpecName: "kube-api-access-9z5p8") pod "0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" (UID: "0410b897-4bd8-48aa-a9fd-8213f6d9dbd7"). InnerVolumeSpecName "kube-api-access-9z5p8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.970983 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" (UID: "0410b897-4bd8-48aa-a9fd-8213f6d9dbd7"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.973227 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" (UID: "0410b897-4bd8-48aa-a9fd-8213f6d9dbd7"). InnerVolumeSpecName "pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.976524 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" (UID: "0410b897-4bd8-48aa-a9fd-8213f6d9dbd7"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.983863 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-web-config" (OuterVolumeSpecName: "web-config") pod "0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" (UID: "0410b897-4bd8-48aa-a9fd-8213f6d9dbd7"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.985132 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0410b897-4bd8-48aa-a9fd-8213f6d9dbd7","Type":"ContainerDied","Data":"709385d651d4bcb103b7d7d0e2928451ab8b488203130eb1f7baa5322860b0f5"} Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.985172 4870 scope.go:117] "RemoveContainer" containerID="49386d341f6cc7754611b6a6c194cd8140385a7b91dc049f19bb791c005d6a86" Jan 30 08:27:16 crc kubenswrapper[4870]: I0130 08:27:16.985291 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.000400 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"46634e41-7d5b-4181-b824-716bb37fca47","Type":"ContainerStarted","Data":"28b3f6582d57c735880cfcf204b8d97b68c5ebf4315506c7cc59ffa463f6d05b"} Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.000673 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"46634e41-7d5b-4181-b824-716bb37fca47","Type":"ContainerStarted","Data":"ee6cfe59aa2bf3c554bfc7c37484cbebc1397bdc28a2e25c180f98d5d0aa78d6"} Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.000687 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"46634e41-7d5b-4181-b824-716bb37fca47","Type":"ContainerStarted","Data":"2c5634890c50294f161a731d472116f0b41f085293d4cc7bc69c2394ef3a2ae1"} Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.002470 4870 generic.go:334] "Generic (PLEG): container finished" podID="0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61" containerID="d52e878ce9dd90e8dba444ebd6a2071ac79735b92b1f1220889d88eefcb18bc4" exitCode=0 Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.002508 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rwchz-config-9v2n7" event={"ID":"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61","Type":"ContainerDied","Data":"d52e878ce9dd90e8dba444ebd6a2071ac79735b92b1f1220889d88eefcb18bc4"} Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.020048 4870 scope.go:117] "RemoveContainer" containerID="0e6b5e7cf5bb76c5d4597f6eef6bb065c51cbad9cc2aae78711fd5e59b7109c8" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.046512 4870 scope.go:117] "RemoveContainer" containerID="4c9a2cc96afb4697dcd9efa47cf237f34c2e2a0fb97e86a04bed4e71098a047b" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.056694 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z5p8\" (UniqueName: \"kubernetes.io/projected/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-kube-api-access-9z5p8\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.056724 4870 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-web-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.056735 4870 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.056745 4870 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.056774 4870 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\") on node \"crc\" " Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.056786 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.056795 4870 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-config-out\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.056803 4870 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.056816 4870 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.056826 4870 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.063772 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.071135 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.086209 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cpgc6"] Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.091533 4870 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.091661 4870 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1") on node "crc" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.098041 4870 scope.go:117] "RemoveContainer" containerID="431cfd92f94e1fd13cdf200e4b8c59047ac3e311acf24702741a42c672002d0e" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.106586 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 08:27:17 crc kubenswrapper[4870]: E0130 08:27:17.106971 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" containerName="config-reloader" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.106990 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" containerName="config-reloader" Jan 30 08:27:17 crc kubenswrapper[4870]: E0130 08:27:17.107014 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" containerName="prometheus" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.107022 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" containerName="prometheus" Jan 30 08:27:17 crc kubenswrapper[4870]: E0130 08:27:17.107036 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" containerName="thanos-sidecar" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.107042 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" containerName="thanos-sidecar" Jan 30 08:27:17 crc kubenswrapper[4870]: E0130 08:27:17.107054 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" containerName="init-config-reloader" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.107060 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" containerName="init-config-reloader" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.107208 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" containerName="thanos-sidecar" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.107220 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" containerName="config-reloader" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.107234 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" containerName="prometheus" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.108666 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.112975 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.113286 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.116560 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.116821 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.116827 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.117067 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.122906 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-88lql" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.123652 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.127616 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.139403 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.160632 4870 reconciler_common.go:293] "Volume detached for volume \"pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.262317 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.262382 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-config\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.262403 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8c8b2056-4db2-489e-b1d1-b201e38e84c8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.262421 4870 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8c8b2056-4db2-489e-b1d1-b201e38e84c8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.262451 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5j9q\" (UniqueName: \"kubernetes.io/projected/8c8b2056-4db2-489e-b1d1-b201e38e84c8-kube-api-access-f5j9q\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.262472 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.262506 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.262525 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8c8b2056-4db2-489e-b1d1-b201e38e84c8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.262563 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8c8b2056-4db2-489e-b1d1-b201e38e84c8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.262596 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.262630 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8c8b2056-4db2-489e-b1d1-b201e38e84c8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.262661 4870 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.262697 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.364066 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.364119 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-config\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.364140 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8c8b2056-4db2-489e-b1d1-b201e38e84c8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.364159 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8c8b2056-4db2-489e-b1d1-b201e38e84c8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.364180 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5j9q\" (UniqueName: \"kubernetes.io/projected/8c8b2056-4db2-489e-b1d1-b201e38e84c8-kube-api-access-f5j9q\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.364200 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.364222 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-secret-combined-ca-bundle\") pod 
\"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.364239 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8c8b2056-4db2-489e-b1d1-b201e38e84c8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.364260 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8c8b2056-4db2-489e-b1d1-b201e38e84c8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.364283 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.364316 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8c8b2056-4db2-489e-b1d1-b201e38e84c8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.364346 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.364373 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.365524 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8c8b2056-4db2-489e-b1d1-b201e38e84c8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.366075 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8c8b2056-4db2-489e-b1d1-b201e38e84c8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 
08:27:17.366100 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8c8b2056-4db2-489e-b1d1-b201e38e84c8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.367353 4870 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.367381 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b608408b27cf3925c08af2a9b3a133a2b5eb87db3a290a5641371b0533b7f7d2/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.369489 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.369683 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.369792 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.369819 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8c8b2056-4db2-489e-b1d1-b201e38e84c8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.373544 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.374598 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8c8b2056-4db2-489e-b1d1-b201e38e84c8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " 
pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.378063 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-config\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.382869 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5j9q\" (UniqueName: \"kubernetes.io/projected/8c8b2056-4db2-489e-b1d1-b201e38e84c8-kube-api-access-f5j9q\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.383381 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.406557 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\") pod \"prometheus-metric-storage-0\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.433317 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:17 crc kubenswrapper[4870]: I0130 08:27:17.824562 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 08:27:17 crc kubenswrapper[4870]: W0130 08:27:17.827559 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c8b2056_4db2_489e_b1d1_b201e38e84c8.slice/crio-290c2383e9eae83628bb57bb648be794756981366804a5738c5a44985dd7ad40 WatchSource:0}: Error finding container 290c2383e9eae83628bb57bb648be794756981366804a5738c5a44985dd7ad40: Status 404 returned error can't find the container with id 290c2383e9eae83628bb57bb648be794756981366804a5738c5a44985dd7ad40 Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.021855 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8c8b2056-4db2-489e-b1d1-b201e38e84c8","Type":"ContainerStarted","Data":"290c2383e9eae83628bb57bb648be794756981366804a5738c5a44985dd7ad40"} Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.023534 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cpgc6" event={"ID":"8bc3ddf0-5fc8-4425-a434-1452753e1297","Type":"ContainerStarted","Data":"1e43a638833e8a28b17503377129992ef8df2c8dae8700c2567db5f0ab6b74f9"} Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.023555 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cpgc6" event={"ID":"8bc3ddf0-5fc8-4425-a434-1452753e1297","Type":"ContainerStarted","Data":"78139496b9b71f0c64108fce20cdfec939241b19eab6e4d8770978ff18162ccc"} Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.032552 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"46634e41-7d5b-4181-b824-716bb37fca47","Type":"ContainerStarted","Data":"16d47837901223c83d13a881667ca19a8d3f0bba47dd7740ca29b2b563faec06"} Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.032602 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"46634e41-7d5b-4181-b824-716bb37fca47","Type":"ContainerStarted","Data":"5c323511750406d5c784857d3013f82787d96dd94be71912f4d4b34888391b38"} Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.043861 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-cpgc6" podStartSLOduration=2.043844274 podStartE2EDuration="2.043844274s" podCreationTimestamp="2026-01-30 08:27:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:27:18.043096391 +0000 UTC m=+1076.738643510" watchObservedRunningTime="2026-01-30 08:27:18.043844274 +0000 UTC m=+1076.739391383" Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.085279 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0410b897-4bd8-48aa-a9fd-8213f6d9dbd7" path="/var/lib/kubelet/pods/0410b897-4bd8-48aa-a9fd-8213f6d9dbd7/volumes" Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.086238 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85bfb16d-8b6c-46e2-a7e3-0a5051aa66df" path="/var/lib/kubelet/pods/85bfb16d-8b6c-46e2-a7e3-0a5051aa66df/volumes" Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.464910 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rwchz-config-9v2n7" Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.584980 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-scripts\") pod \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.585082 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-additional-scripts\") pod \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.585111 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4c72\" (UniqueName: \"kubernetes.io/projected/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-kube-api-access-v4c72\") pod \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.585157 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-var-run\") pod \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.585177 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-var-run-ovn\") pod \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.585191 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-var-log-ovn\") pod \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\" (UID: \"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61\") " Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.585503 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61" (UID: "0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.585522 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-var-run" (OuterVolumeSpecName: "var-run") pod "0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61" (UID: "0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.585536 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61" (UID: "0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.585950 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61" (UID: "0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.587174 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-scripts" (OuterVolumeSpecName: "scripts") pod "0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61" (UID: "0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.593081 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-kube-api-access-v4c72" (OuterVolumeSpecName: "kube-api-access-v4c72") pod "0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61" (UID: "0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61"). InnerVolumeSpecName "kube-api-access-v4c72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.686822 4870 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.686852 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4c72\" (UniqueName: \"kubernetes.io/projected/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-kube-api-access-v4c72\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.686865 4870 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.686886 4870 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.686896 4870 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:18 crc kubenswrapper[4870]: I0130 08:27:18.686906 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:19 crc kubenswrapper[4870]: I0130 08:27:19.043294 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rwchz-config-9v2n7" event={"ID":"0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61","Type":"ContainerDied","Data":"158eec22a72dfc9360224d25636792800780cc5b6659cf69de830857fbfed11d"} Jan 30 08:27:19 crc kubenswrapper[4870]: I0130 08:27:19.043342 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="158eec22a72dfc9360224d25636792800780cc5b6659cf69de830857fbfed11d" Jan 30 08:27:19 crc 
kubenswrapper[4870]: I0130 08:27:19.043313 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rwchz-config-9v2n7" Jan 30 08:27:19 crc kubenswrapper[4870]: I0130 08:27:19.044338 4870 generic.go:334] "Generic (PLEG): container finished" podID="8bc3ddf0-5fc8-4425-a434-1452753e1297" containerID="1e43a638833e8a28b17503377129992ef8df2c8dae8700c2567db5f0ab6b74f9" exitCode=0 Jan 30 08:27:19 crc kubenswrapper[4870]: I0130 08:27:19.044383 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cpgc6" event={"ID":"8bc3ddf0-5fc8-4425-a434-1452753e1297","Type":"ContainerDied","Data":"1e43a638833e8a28b17503377129992ef8df2c8dae8700c2567db5f0ab6b74f9"} Jan 30 08:27:19 crc kubenswrapper[4870]: I0130 08:27:19.046561 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"46634e41-7d5b-4181-b824-716bb37fca47","Type":"ContainerStarted","Data":"9e3e6fdad343180c47bac108c7b2e0f3a756149d88bf4074b84e4a94ce06e882"} Jan 30 08:27:19 crc kubenswrapper[4870]: I0130 08:27:19.046594 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"46634e41-7d5b-4181-b824-716bb37fca47","Type":"ContainerStarted","Data":"e2f2e57265bf8c97c3bd6727882b68f4dff4210478291f0099551840b35dab45"} Jan 30 08:27:19 crc kubenswrapper[4870]: I0130 08:27:19.570354 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-rwchz-config-9v2n7"] Jan 30 08:27:19 crc kubenswrapper[4870]: I0130 08:27:19.613173 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-rwchz-config-9v2n7"] Jan 30 08:27:20 crc kubenswrapper[4870]: I0130 08:27:20.058540 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"46634e41-7d5b-4181-b824-716bb37fca47","Type":"ContainerStarted","Data":"3fd30c252a3edaca8bae848a280736aa4cf37abe10eda7b9788b10dc738b3491"} Jan 30 08:27:20 crc kubenswrapper[4870]: I0130 08:27:20.058585 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"46634e41-7d5b-4181-b824-716bb37fca47","Type":"ContainerStarted","Data":"0dbfe1e27c44cb16dd97a3681009f1b6376ea16bdf4fcb49366b4993ce26db3b"} Jan 30 08:27:20 crc kubenswrapper[4870]: I0130 08:27:20.058595 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"46634e41-7d5b-4181-b824-716bb37fca47","Type":"ContainerStarted","Data":"0cf8afbd61720cfc11d46949eb9f90c46dd3cf0e6fb98475f3adbccb26a3d908"} Jan 30 08:27:20 crc kubenswrapper[4870]: I0130 08:27:20.058604 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"46634e41-7d5b-4181-b824-716bb37fca47","Type":"ContainerStarted","Data":"a5b93a8945cb3e931297968ea316ce014efc23a929aea9a0c6eef33d025b0f77"} Jan 30 08:27:20 crc kubenswrapper[4870]: I0130 08:27:20.085648 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61" path="/var/lib/kubelet/pods/0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61/volumes" Jan 30 08:27:20 crc kubenswrapper[4870]: I0130 08:27:20.393501 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cpgc6" Jan 30 08:27:20 crc kubenswrapper[4870]: I0130 08:27:20.537711 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bc3ddf0-5fc8-4425-a434-1452753e1297-operator-scripts\") pod \"8bc3ddf0-5fc8-4425-a434-1452753e1297\" (UID: \"8bc3ddf0-5fc8-4425-a434-1452753e1297\") " Jan 30 08:27:20 crc kubenswrapper[4870]: I0130 08:27:20.537745 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9t6n\" (UniqueName: \"kubernetes.io/projected/8bc3ddf0-5fc8-4425-a434-1452753e1297-kube-api-access-f9t6n\") pod \"8bc3ddf0-5fc8-4425-a434-1452753e1297\" (UID: \"8bc3ddf0-5fc8-4425-a434-1452753e1297\") " Jan 30 08:27:20 crc kubenswrapper[4870]: I0130 08:27:20.538568 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bc3ddf0-5fc8-4425-a434-1452753e1297-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8bc3ddf0-5fc8-4425-a434-1452753e1297" (UID: "8bc3ddf0-5fc8-4425-a434-1452753e1297"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:20 crc kubenswrapper[4870]: I0130 08:27:20.552023 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bc3ddf0-5fc8-4425-a434-1452753e1297-kube-api-access-f9t6n" (OuterVolumeSpecName: "kube-api-access-f9t6n") pod "8bc3ddf0-5fc8-4425-a434-1452753e1297" (UID: "8bc3ddf0-5fc8-4425-a434-1452753e1297"). InnerVolumeSpecName "kube-api-access-f9t6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:20 crc kubenswrapper[4870]: I0130 08:27:20.639198 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bc3ddf0-5fc8-4425-a434-1452753e1297-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:20 crc kubenswrapper[4870]: I0130 08:27:20.639236 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9t6n\" (UniqueName: \"kubernetes.io/projected/8bc3ddf0-5fc8-4425-a434-1452753e1297-kube-api-access-f9t6n\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.066783 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8c8b2056-4db2-489e-b1d1-b201e38e84c8","Type":"ContainerStarted","Data":"93e4e1345741b60dca904480e4327da8f596dec2a8d2178c87fe6d5632a2daeb"} Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.068924 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cpgc6" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.068867 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cpgc6" event={"ID":"8bc3ddf0-5fc8-4425-a434-1452753e1297","Type":"ContainerDied","Data":"78139496b9b71f0c64108fce20cdfec939241b19eab6e4d8770978ff18162ccc"} Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.069063 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78139496b9b71f0c64108fce20cdfec939241b19eab6e4d8770978ff18162ccc" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.075004 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"46634e41-7d5b-4181-b824-716bb37fca47","Type":"ContainerStarted","Data":"3c1d902d9b1767676c892638e464831c2bd80397f512d2b8f4c5f9e2d5490e79"} Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.075048 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"46634e41-7d5b-4181-b824-716bb37fca47","Type":"ContainerStarted","Data":"32972c3d0b0dd45d3e677908a123951df694031ac3230dbeddd921b379482ec6"} Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.075077 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"46634e41-7d5b-4181-b824-716bb37fca47","Type":"ContainerStarted","Data":"ddf3bd3df8ecee80792fc5a8f5a73068eecb83f791313d0a09626487f5d05403"} Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.159609 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.366932356 podStartE2EDuration="40.159584057s" podCreationTimestamp="2026-01-30 08:26:41 +0000 UTC" firstStartedPulling="2026-01-30 08:27:15.166302482 +0000 UTC m=+1073.861849591" lastFinishedPulling="2026-01-30 08:27:18.958954183 +0000 UTC m=+1077.654501292" observedRunningTime="2026-01-30 08:27:21.151459357 +0000 UTC m=+1079.847006476" watchObservedRunningTime="2026-01-30 08:27:21.159584057 +0000 UTC m=+1079.855131166" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.426020 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757cc9679f-wq2nt"] Jan 30 08:27:21 crc kubenswrapper[4870]: E0130 08:27:21.426382 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61" containerName="ovn-config" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.426402 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61" containerName="ovn-config" Jan 30 08:27:21 crc kubenswrapper[4870]: E0130 08:27:21.426423 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bc3ddf0-5fc8-4425-a434-1452753e1297" containerName="mariadb-account-create-update" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.426432 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bc3ddf0-5fc8-4425-a434-1452753e1297" containerName="mariadb-account-create-update" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.426624 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bc3ddf0-5fc8-4425-a434-1452753e1297" containerName="mariadb-account-create-update" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.426655 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c02c670-5f7d-4ee6-9072-e6e1ba2d6c61" containerName="ovn-config" Jan 30 08:27:21 crc 
kubenswrapper[4870]: I0130 08:27:21.427567 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.429237 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.442789 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757cc9679f-wq2nt"] Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.554466 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mpz4\" (UniqueName: \"kubernetes.io/projected/c409417e-6b71-491c-b7c5-bf1a2b63baed-kube-api-access-5mpz4\") pod \"dnsmasq-dns-757cc9679f-wq2nt\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.554567 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-dns-swift-storage-0\") pod \"dnsmasq-dns-757cc9679f-wq2nt\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.554585 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-ovsdbserver-sb\") pod \"dnsmasq-dns-757cc9679f-wq2nt\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.554604 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-config\") pod \"dnsmasq-dns-757cc9679f-wq2nt\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.554634 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-ovsdbserver-nb\") pod \"dnsmasq-dns-757cc9679f-wq2nt\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.554652 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-dns-svc\") pod \"dnsmasq-dns-757cc9679f-wq2nt\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.655916 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mpz4\" (UniqueName: \"kubernetes.io/projected/c409417e-6b71-491c-b7c5-bf1a2b63baed-kube-api-access-5mpz4\") pod \"dnsmasq-dns-757cc9679f-wq2nt\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.656377 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-dns-swift-storage-0\") pod \"dnsmasq-dns-757cc9679f-wq2nt\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.656404 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-ovsdbserver-sb\") pod \"dnsmasq-dns-757cc9679f-wq2nt\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.657204 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-dns-swift-storage-0\") pod \"dnsmasq-dns-757cc9679f-wq2nt\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.657271 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-config\") pod \"dnsmasq-dns-757cc9679f-wq2nt\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.657826 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-config\") pod \"dnsmasq-dns-757cc9679f-wq2nt\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.657307 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-ovsdbserver-nb\") pod \"dnsmasq-dns-757cc9679f-wq2nt\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.657899 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-dns-svc\") pod \"dnsmasq-dns-757cc9679f-wq2nt\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.657999 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-ovsdbserver-sb\") pod \"dnsmasq-dns-757cc9679f-wq2nt\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.658530 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-dns-svc\") pod \"dnsmasq-dns-757cc9679f-wq2nt\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.658779 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-ovsdbserver-nb\") pod \"dnsmasq-dns-757cc9679f-wq2nt\" (UID: 
\"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.675332 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mpz4\" (UniqueName: \"kubernetes.io/projected/c409417e-6b71-491c-b7c5-bf1a2b63baed-kube-api-access-5mpz4\") pod \"dnsmasq-dns-757cc9679f-wq2nt\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:27:21 crc kubenswrapper[4870]: I0130 08:27:21.743948 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:27:22 crc kubenswrapper[4870]: I0130 08:27:22.199244 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757cc9679f-wq2nt"] Jan 30 08:27:23 crc kubenswrapper[4870]: I0130 08:27:23.115712 4870 generic.go:334] "Generic (PLEG): container finished" podID="c409417e-6b71-491c-b7c5-bf1a2b63baed" containerID="0bc8a7090e4dbce528560fe9634da361240a61440f10f7e3c579fda9915e352a" exitCode=0 Jan 30 08:27:23 crc kubenswrapper[4870]: I0130 08:27:23.115996 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" event={"ID":"c409417e-6b71-491c-b7c5-bf1a2b63baed","Type":"ContainerDied","Data":"0bc8a7090e4dbce528560fe9634da361240a61440f10f7e3c579fda9915e352a"} Jan 30 08:27:23 crc kubenswrapper[4870]: I0130 08:27:23.116020 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" event={"ID":"c409417e-6b71-491c-b7c5-bf1a2b63baed","Type":"ContainerStarted","Data":"27e7324b5df73e3d0d4ead3bf3867e6cd08ef4e1f33f8795d331a5b682f586af"} Jan 30 08:27:24 crc kubenswrapper[4870]: I0130 08:27:24.125558 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" event={"ID":"c409417e-6b71-491c-b7c5-bf1a2b63baed","Type":"ContainerStarted","Data":"72ef45c7c24f8c5fc6788a5862241b333000ac3fd13dcb6350ad2553d12d13f8"} Jan 30 08:27:24 crc kubenswrapper[4870]: I0130 08:27:24.126024 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:27:24 crc kubenswrapper[4870]: I0130 08:27:24.151249 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" podStartSLOduration=3.151230128 podStartE2EDuration="3.151230128s" podCreationTimestamp="2026-01-30 08:27:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:27:24.142590502 +0000 UTC m=+1082.838137631" watchObservedRunningTime="2026-01-30 08:27:24.151230128 +0000 UTC m=+1082.846777257" Jan 30 08:27:25 crc kubenswrapper[4870]: I0130 08:27:25.514673 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="97f21b9d-25bf-4a64-94ef-51d83b662ab3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Jan 30 08:27:25 crc kubenswrapper[4870]: I0130 08:27:25.870949 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="0ee2f8ed-b030-40fc-90fb-32a4e404b1fd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Jan 30 08:27:26 crc kubenswrapper[4870]: I0130 08:27:26.230764 4870 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/rabbitmq-notifications-server-0" podUID="2ab884a9-b47a-476a-8f89-140093b96527" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Jan 30 08:27:27 crc kubenswrapper[4870]: I0130 08:27:27.160441 4870 generic.go:334] "Generic (PLEG): container finished" podID="8c8b2056-4db2-489e-b1d1-b201e38e84c8" containerID="93e4e1345741b60dca904480e4327da8f596dec2a8d2178c87fe6d5632a2daeb" exitCode=0 Jan 30 08:27:27 crc kubenswrapper[4870]: I0130 08:27:27.160499 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8c8b2056-4db2-489e-b1d1-b201e38e84c8","Type":"ContainerDied","Data":"93e4e1345741b60dca904480e4327da8f596dec2a8d2178c87fe6d5632a2daeb"} Jan 30 08:27:28 crc kubenswrapper[4870]: I0130 08:27:28.170905 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8c8b2056-4db2-489e-b1d1-b201e38e84c8","Type":"ContainerStarted","Data":"ea102a1406731d57700d5196e250072c2053fa2345212a3d6975e629610cb94c"} Jan 30 08:27:31 crc kubenswrapper[4870]: I0130 08:27:31.746715 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:27:31 crc kubenswrapper[4870]: I0130 08:27:31.936231 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65cc6fcf45-r6b9m"] Jan 30 08:27:31 crc kubenswrapper[4870]: I0130 08:27:31.936500 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" podUID="8cd19c31-4252-4de7-a673-9da7aedcb785" containerName="dnsmasq-dns" containerID="cri-o://137ef8da742a762887455130866543407aab4e626fc693e72bbf0ba327725c4f" gracePeriod=10 Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.214776 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8c8b2056-4db2-489e-b1d1-b201e38e84c8","Type":"ContainerStarted","Data":"d46fd5e887baec843bdd4f9f0254772bf4dc50323e052cd052dd2ea4657b7397"} Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.214954 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8c8b2056-4db2-489e-b1d1-b201e38e84c8","Type":"ContainerStarted","Data":"a9347a512a592b79cb85be5a5a664bfadec21fed65bd7eacf1a97eb008166eb1"} Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.228354 4870 generic.go:334] "Generic (PLEG): container finished" podID="8cd19c31-4252-4de7-a673-9da7aedcb785" containerID="137ef8da742a762887455130866543407aab4e626fc693e72bbf0ba327725c4f" exitCode=0 Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.228389 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" event={"ID":"8cd19c31-4252-4de7-a673-9da7aedcb785","Type":"ContainerDied","Data":"137ef8da742a762887455130866543407aab4e626fc693e72bbf0ba327725c4f"} Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.242159 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=15.24214519 podStartE2EDuration="15.24214519s" podCreationTimestamp="2026-01-30 08:27:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:27:32.240653295 +0000 UTC m=+1090.936200404" watchObservedRunningTime="2026-01-30 08:27:32.24214519 +0000 UTC 
m=+1090.937692299" Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.373752 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.434214 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.434271 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.454329 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.468977 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-ovsdbserver-nb\") pod \"8cd19c31-4252-4de7-a673-9da7aedcb785\" (UID: \"8cd19c31-4252-4de7-a673-9da7aedcb785\") " Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.469029 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-dns-svc\") pod \"8cd19c31-4252-4de7-a673-9da7aedcb785\" (UID: \"8cd19c31-4252-4de7-a673-9da7aedcb785\") " Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.469071 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-ovsdbserver-sb\") pod \"8cd19c31-4252-4de7-a673-9da7aedcb785\" (UID: \"8cd19c31-4252-4de7-a673-9da7aedcb785\") " Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.469206 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx8qz\" (UniqueName: \"kubernetes.io/projected/8cd19c31-4252-4de7-a673-9da7aedcb785-kube-api-access-cx8qz\") pod \"8cd19c31-4252-4de7-a673-9da7aedcb785\" (UID: \"8cd19c31-4252-4de7-a673-9da7aedcb785\") " Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.469271 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-config\") pod \"8cd19c31-4252-4de7-a673-9da7aedcb785\" (UID: \"8cd19c31-4252-4de7-a673-9da7aedcb785\") " Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.492600 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cd19c31-4252-4de7-a673-9da7aedcb785-kube-api-access-cx8qz" (OuterVolumeSpecName: "kube-api-access-cx8qz") pod "8cd19c31-4252-4de7-a673-9da7aedcb785" (UID: "8cd19c31-4252-4de7-a673-9da7aedcb785"). InnerVolumeSpecName "kube-api-access-cx8qz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.524677 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8cd19c31-4252-4de7-a673-9da7aedcb785" (UID: "8cd19c31-4252-4de7-a673-9da7aedcb785"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.535343 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8cd19c31-4252-4de7-a673-9da7aedcb785" (UID: "8cd19c31-4252-4de7-a673-9da7aedcb785"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.537855 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-config" (OuterVolumeSpecName: "config") pod "8cd19c31-4252-4de7-a673-9da7aedcb785" (UID: "8cd19c31-4252-4de7-a673-9da7aedcb785"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.547151 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8cd19c31-4252-4de7-a673-9da7aedcb785" (UID: "8cd19c31-4252-4de7-a673-9da7aedcb785"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.570830 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.570861 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.570884 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.570894 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx8qz\" (UniqueName: \"kubernetes.io/projected/8cd19c31-4252-4de7-a673-9da7aedcb785-kube-api-access-cx8qz\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:32 crc kubenswrapper[4870]: I0130 08:27:32.570905 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cd19c31-4252-4de7-a673-9da7aedcb785-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:33 crc kubenswrapper[4870]: I0130 08:27:33.243239 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" event={"ID":"8cd19c31-4252-4de7-a673-9da7aedcb785","Type":"ContainerDied","Data":"374b146ad8265eb6041ff5f1143dd86432961e72a020212acd84189e8d8f2978"} Jan 30 08:27:33 crc kubenswrapper[4870]: I0130 08:27:33.243742 4870 scope.go:117] "RemoveContainer" containerID="137ef8da742a762887455130866543407aab4e626fc693e72bbf0ba327725c4f" Jan 30 08:27:33 crc kubenswrapper[4870]: I0130 08:27:33.243283 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65cc6fcf45-r6b9m" Jan 30 08:27:33 crc kubenswrapper[4870]: I0130 08:27:33.252880 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 30 08:27:33 crc kubenswrapper[4870]: I0130 08:27:33.279493 4870 scope.go:117] "RemoveContainer" containerID="fa0e8e29630ac45ae5392bdda60293a38298eb7a8fb05baa4e216154fe19f932" Jan 30 08:27:33 crc kubenswrapper[4870]: I0130 08:27:33.354205 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65cc6fcf45-r6b9m"] Jan 30 08:27:33 crc kubenswrapper[4870]: I0130 08:27:33.365682 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65cc6fcf45-r6b9m"] Jan 30 08:27:34 crc kubenswrapper[4870]: I0130 08:27:34.091103 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cd19c31-4252-4de7-a673-9da7aedcb785" path="/var/lib/kubelet/pods/8cd19c31-4252-4de7-a673-9da7aedcb785/volumes" Jan 30 08:27:35 crc kubenswrapper[4870]: I0130 08:27:35.515287 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 08:27:35 crc kubenswrapper[4870]: I0130 08:27:35.870053 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:27:35 crc kubenswrapper[4870]: I0130 08:27:35.885107 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-kqrrr"] Jan 30 08:27:35 crc kubenswrapper[4870]: E0130 08:27:35.885418 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cd19c31-4252-4de7-a673-9da7aedcb785" containerName="dnsmasq-dns" Jan 30 08:27:35 crc kubenswrapper[4870]: I0130 08:27:35.885435 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cd19c31-4252-4de7-a673-9da7aedcb785" containerName="dnsmasq-dns" Jan 30 08:27:35 crc kubenswrapper[4870]: E0130 08:27:35.885449 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cd19c31-4252-4de7-a673-9da7aedcb785" containerName="init" Jan 30 08:27:35 crc kubenswrapper[4870]: I0130 08:27:35.885456 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cd19c31-4252-4de7-a673-9da7aedcb785" containerName="init" Jan 30 08:27:35 crc kubenswrapper[4870]: I0130 08:27:35.885599 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cd19c31-4252-4de7-a673-9da7aedcb785" containerName="dnsmasq-dns" Jan 30 08:27:35 crc kubenswrapper[4870]: I0130 08:27:35.886146 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-kqrrr" Jan 30 08:27:35 crc kubenswrapper[4870]: I0130 08:27:35.897829 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kqrrr"] Jan 30 08:27:35 crc kubenswrapper[4870]: I0130 08:27:35.946602 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59f46507-531f-4d06-86d9-6c07a50abc6d-operator-scripts\") pod \"cinder-db-create-kqrrr\" (UID: \"59f46507-531f-4d06-86d9-6c07a50abc6d\") " pod="openstack/cinder-db-create-kqrrr" Jan 30 08:27:35 crc kubenswrapper[4870]: I0130 08:27:35.946662 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkpk7\" (UniqueName: \"kubernetes.io/projected/59f46507-531f-4d06-86d9-6c07a50abc6d-kube-api-access-tkpk7\") pod \"cinder-db-create-kqrrr\" (UID: \"59f46507-531f-4d06-86d9-6c07a50abc6d\") " pod="openstack/cinder-db-create-kqrrr" Jan 30 08:27:35 crc kubenswrapper[4870]: I0130 08:27:35.978546 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-6lzp5"] Jan 30 08:27:35 crc kubenswrapper[4870]: I0130 08:27:35.982665 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6lzp5" Jan 30 08:27:35 crc kubenswrapper[4870]: I0130 08:27:35.988814 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-6lzp5"] Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.047944 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d425622-da05-4988-a059-013c06b4ecf1-operator-scripts\") pod \"barbican-db-create-6lzp5\" (UID: \"4d425622-da05-4988-a059-013c06b4ecf1\") " pod="openstack/barbican-db-create-6lzp5" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.047991 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59f46507-531f-4d06-86d9-6c07a50abc6d-operator-scripts\") pod \"cinder-db-create-kqrrr\" (UID: \"59f46507-531f-4d06-86d9-6c07a50abc6d\") " pod="openstack/cinder-db-create-kqrrr" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.048028 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkpk7\" (UniqueName: \"kubernetes.io/projected/59f46507-531f-4d06-86d9-6c07a50abc6d-kube-api-access-tkpk7\") pod \"cinder-db-create-kqrrr\" (UID: \"59f46507-531f-4d06-86d9-6c07a50abc6d\") " pod="openstack/cinder-db-create-kqrrr" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.048061 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlw25\" (UniqueName: \"kubernetes.io/projected/4d425622-da05-4988-a059-013c06b4ecf1-kube-api-access-vlw25\") pod \"barbican-db-create-6lzp5\" (UID: \"4d425622-da05-4988-a059-013c06b4ecf1\") " pod="openstack/barbican-db-create-6lzp5" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.048671 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59f46507-531f-4d06-86d9-6c07a50abc6d-operator-scripts\") pod \"cinder-db-create-kqrrr\" (UID: \"59f46507-531f-4d06-86d9-6c07a50abc6d\") " pod="openstack/cinder-db-create-kqrrr" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.072533 
4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkpk7\" (UniqueName: \"kubernetes.io/projected/59f46507-531f-4d06-86d9-6c07a50abc6d-kube-api-access-tkpk7\") pod \"cinder-db-create-kqrrr\" (UID: \"59f46507-531f-4d06-86d9-6c07a50abc6d\") " pod="openstack/cinder-db-create-kqrrr" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.119791 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-937e-account-create-update-6w49r"] Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.120984 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-937e-account-create-update-6w49r" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.131404 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-937e-account-create-update-6w49r"] Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.131542 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.148993 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htvx9\" (UniqueName: \"kubernetes.io/projected/17e1f740-4393-4ba2-8242-fb863196cb02-kube-api-access-htvx9\") pod \"cinder-937e-account-create-update-6w49r\" (UID: \"17e1f740-4393-4ba2-8242-fb863196cb02\") " pod="openstack/cinder-937e-account-create-update-6w49r" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.149064 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlw25\" (UniqueName: \"kubernetes.io/projected/4d425622-da05-4988-a059-013c06b4ecf1-kube-api-access-vlw25\") pod \"barbican-db-create-6lzp5\" (UID: \"4d425622-da05-4988-a059-013c06b4ecf1\") " pod="openstack/barbican-db-create-6lzp5" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.149161 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17e1f740-4393-4ba2-8242-fb863196cb02-operator-scripts\") pod \"cinder-937e-account-create-update-6w49r\" (UID: \"17e1f740-4393-4ba2-8242-fb863196cb02\") " pod="openstack/cinder-937e-account-create-update-6w49r" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.149231 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d425622-da05-4988-a059-013c06b4ecf1-operator-scripts\") pod \"barbican-db-create-6lzp5\" (UID: \"4d425622-da05-4988-a059-013c06b4ecf1\") " pod="openstack/barbican-db-create-6lzp5" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.149909 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d425622-da05-4988-a059-013c06b4ecf1-operator-scripts\") pod \"barbican-db-create-6lzp5\" (UID: \"4d425622-da05-4988-a059-013c06b4ecf1\") " pod="openstack/barbican-db-create-6lzp5" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.181513 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlw25\" (UniqueName: \"kubernetes.io/projected/4d425622-da05-4988-a059-013c06b4ecf1-kube-api-access-vlw25\") pod \"barbican-db-create-6lzp5\" (UID: \"4d425622-da05-4988-a059-013c06b4ecf1\") " pod="openstack/barbican-db-create-6lzp5" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.181577 4870 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/keystone-db-sync-f6r68"] Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.182623 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-f6r68" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.186849 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.187132 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vn7b5" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.187253 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.187519 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.191194 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-f6r68"] Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.197499 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-0515-account-create-update-rln5d"] Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.198524 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0515-account-create-update-rln5d" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.200452 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.200914 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kqrrr" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.207333 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0515-account-create-update-rln5d"] Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.231108 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-notifications-server-0" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.250415 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19155d05-01da-4e21-96c2-f23662f8f785-operator-scripts\") pod \"barbican-0515-account-create-update-rln5d\" (UID: \"19155d05-01da-4e21-96c2-f23662f8f785\") " pod="openstack/barbican-0515-account-create-update-rln5d" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.250480 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17e1f740-4393-4ba2-8242-fb863196cb02-operator-scripts\") pod \"cinder-937e-account-create-update-6w49r\" (UID: \"17e1f740-4393-4ba2-8242-fb863196cb02\") " pod="openstack/cinder-937e-account-create-update-6w49r" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.250501 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881527d5-776b-4639-9306-895d1e370abd-config-data\") pod \"keystone-db-sync-f6r68\" (UID: \"881527d5-776b-4639-9306-895d1e370abd\") " pod="openstack/keystone-db-sync-f6r68" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.250579 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d78lm\" 
(UniqueName: \"kubernetes.io/projected/19155d05-01da-4e21-96c2-f23662f8f785-kube-api-access-d78lm\") pod \"barbican-0515-account-create-update-rln5d\" (UID: \"19155d05-01da-4e21-96c2-f23662f8f785\") " pod="openstack/barbican-0515-account-create-update-rln5d" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.250624 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wknsc\" (UniqueName: \"kubernetes.io/projected/881527d5-776b-4639-9306-895d1e370abd-kube-api-access-wknsc\") pod \"keystone-db-sync-f6r68\" (UID: \"881527d5-776b-4639-9306-895d1e370abd\") " pod="openstack/keystone-db-sync-f6r68" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.250644 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htvx9\" (UniqueName: \"kubernetes.io/projected/17e1f740-4393-4ba2-8242-fb863196cb02-kube-api-access-htvx9\") pod \"cinder-937e-account-create-update-6w49r\" (UID: \"17e1f740-4393-4ba2-8242-fb863196cb02\") " pod="openstack/cinder-937e-account-create-update-6w49r" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.250691 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881527d5-776b-4639-9306-895d1e370abd-combined-ca-bundle\") pod \"keystone-db-sync-f6r68\" (UID: \"881527d5-776b-4639-9306-895d1e370abd\") " pod="openstack/keystone-db-sync-f6r68" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.251968 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17e1f740-4393-4ba2-8242-fb863196cb02-operator-scripts\") pod \"cinder-937e-account-create-update-6w49r\" (UID: \"17e1f740-4393-4ba2-8242-fb863196cb02\") " pod="openstack/cinder-937e-account-create-update-6w49r" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.275347 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htvx9\" (UniqueName: \"kubernetes.io/projected/17e1f740-4393-4ba2-8242-fb863196cb02-kube-api-access-htvx9\") pod \"cinder-937e-account-create-update-6w49r\" (UID: \"17e1f740-4393-4ba2-8242-fb863196cb02\") " pod="openstack/cinder-937e-account-create-update-6w49r" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.296507 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-6lzp5" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.351958 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d78lm\" (UniqueName: \"kubernetes.io/projected/19155d05-01da-4e21-96c2-f23662f8f785-kube-api-access-d78lm\") pod \"barbican-0515-account-create-update-rln5d\" (UID: \"19155d05-01da-4e21-96c2-f23662f8f785\") " pod="openstack/barbican-0515-account-create-update-rln5d" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.352051 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wknsc\" (UniqueName: \"kubernetes.io/projected/881527d5-776b-4639-9306-895d1e370abd-kube-api-access-wknsc\") pod \"keystone-db-sync-f6r68\" (UID: \"881527d5-776b-4639-9306-895d1e370abd\") " pod="openstack/keystone-db-sync-f6r68" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.352105 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881527d5-776b-4639-9306-895d1e370abd-combined-ca-bundle\") pod \"keystone-db-sync-f6r68\" (UID: \"881527d5-776b-4639-9306-895d1e370abd\") " pod="openstack/keystone-db-sync-f6r68" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.352186 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19155d05-01da-4e21-96c2-f23662f8f785-operator-scripts\") pod \"barbican-0515-account-create-update-rln5d\" (UID: \"19155d05-01da-4e21-96c2-f23662f8f785\") " pod="openstack/barbican-0515-account-create-update-rln5d" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.352222 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881527d5-776b-4639-9306-895d1e370abd-config-data\") pod \"keystone-db-sync-f6r68\" (UID: \"881527d5-776b-4639-9306-895d1e370abd\") " pod="openstack/keystone-db-sync-f6r68" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.353066 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19155d05-01da-4e21-96c2-f23662f8f785-operator-scripts\") pod \"barbican-0515-account-create-update-rln5d\" (UID: \"19155d05-01da-4e21-96c2-f23662f8f785\") " pod="openstack/barbican-0515-account-create-update-rln5d" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.359064 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881527d5-776b-4639-9306-895d1e370abd-config-data\") pod \"keystone-db-sync-f6r68\" (UID: \"881527d5-776b-4639-9306-895d1e370abd\") " pod="openstack/keystone-db-sync-f6r68" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.363769 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881527d5-776b-4639-9306-895d1e370abd-combined-ca-bundle\") pod \"keystone-db-sync-f6r68\" (UID: \"881527d5-776b-4639-9306-895d1e370abd\") " pod="openstack/keystone-db-sync-f6r68" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.377822 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wknsc\" (UniqueName: \"kubernetes.io/projected/881527d5-776b-4639-9306-895d1e370abd-kube-api-access-wknsc\") pod \"keystone-db-sync-f6r68\" (UID: \"881527d5-776b-4639-9306-895d1e370abd\") " 
pod="openstack/keystone-db-sync-f6r68" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.393509 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d78lm\" (UniqueName: \"kubernetes.io/projected/19155d05-01da-4e21-96c2-f23662f8f785-kube-api-access-d78lm\") pod \"barbican-0515-account-create-update-rln5d\" (UID: \"19155d05-01da-4e21-96c2-f23662f8f785\") " pod="openstack/barbican-0515-account-create-update-rln5d" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.444822 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-937e-account-create-update-6w49r" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.544273 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-f6r68" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.617237 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0515-account-create-update-rln5d" Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.792817 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kqrrr"] Jan 30 08:27:36 crc kubenswrapper[4870]: I0130 08:27:36.857503 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-6lzp5"] Jan 30 08:27:36 crc kubenswrapper[4870]: W0130 08:27:36.873362 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d425622_da05_4988_a059_013c06b4ecf1.slice/crio-443c9665d6a75766dd0f1a996e3ba08bc212823b23adcfb2048b16772860a938 WatchSource:0}: Error finding container 443c9665d6a75766dd0f1a996e3ba08bc212823b23adcfb2048b16772860a938: Status 404 returned error can't find the container with id 443c9665d6a75766dd0f1a996e3ba08bc212823b23adcfb2048b16772860a938 Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:37.044715 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0515-account-create-update-rln5d"] Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:37.072269 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-937e-account-create-update-6w49r"] Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:37.172360 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-f6r68"] Jan 30 08:27:38 crc kubenswrapper[4870]: W0130 08:27:37.186541 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod881527d5_776b_4639_9306_895d1e370abd.slice/crio-a31484a12f791686023b40843494f6f2ece996f4e78a4c51e74f6a74ad7d512e WatchSource:0}: Error finding container a31484a12f791686023b40843494f6f2ece996f4e78a4c51e74f6a74ad7d512e: Status 404 returned error can't find the container with id a31484a12f791686023b40843494f6f2ece996f4e78a4c51e74f6a74ad7d512e Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:37.298390 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f6r68" event={"ID":"881527d5-776b-4639-9306-895d1e370abd","Type":"ContainerStarted","Data":"a31484a12f791686023b40843494f6f2ece996f4e78a4c51e74f6a74ad7d512e"} Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:37.307849 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-937e-account-create-update-6w49r" 
event={"ID":"17e1f740-4393-4ba2-8242-fb863196cb02","Type":"ContainerStarted","Data":"b9684209c89c1359375b31da44bbfc4622187e78936d27a08d572996df752ae7"} Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:37.311747 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6lzp5" event={"ID":"4d425622-da05-4988-a059-013c06b4ecf1","Type":"ContainerStarted","Data":"ce2685881a857cd53a444b13c2f7aef4bd6f5c6b26f0a8cbc8a0c60a7f826c60"} Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:37.311785 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6lzp5" event={"ID":"4d425622-da05-4988-a059-013c06b4ecf1","Type":"ContainerStarted","Data":"443c9665d6a75766dd0f1a996e3ba08bc212823b23adcfb2048b16772860a938"} Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:37.314531 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0515-account-create-update-rln5d" event={"ID":"19155d05-01da-4e21-96c2-f23662f8f785","Type":"ContainerStarted","Data":"3133fff318af851e54e7933fe20396b64a992bab1d22e8aca788bbc77160af37"} Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:37.317073 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kqrrr" event={"ID":"59f46507-531f-4d06-86d9-6c07a50abc6d","Type":"ContainerStarted","Data":"c19fa8ba72448fbf848d632a4b2c87c38ba00d3573897003f02d36a9263593ff"} Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:37.317106 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kqrrr" event={"ID":"59f46507-531f-4d06-86d9-6c07a50abc6d","Type":"ContainerStarted","Data":"f141a3af47237c42daeb2dac21dd12e35e4144a583e7317c4500c4b27418e250"} Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:37.335137 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-6lzp5" podStartSLOduration=2.335118903 podStartE2EDuration="2.335118903s" podCreationTimestamp="2026-01-30 08:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:27:37.326219861 +0000 UTC m=+1096.021766970" watchObservedRunningTime="2026-01-30 08:27:37.335118903 +0000 UTC m=+1096.030666012" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:37.352483 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-kqrrr" podStartSLOduration=2.352465547 podStartE2EDuration="2.352465547s" podCreationTimestamp="2026-01-30 08:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:27:37.348463069 +0000 UTC m=+1096.044010188" watchObservedRunningTime="2026-01-30 08:27:37.352465547 +0000 UTC m=+1096.048012656" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.326679 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-937e-account-create-update-6w49r" event={"ID":"17e1f740-4393-4ba2-8242-fb863196cb02","Type":"ContainerStarted","Data":"ebb6defef32112bcd4f761a254fe06dd72ca1e2b11d0f09023e3983d12f747be"} Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.328626 4870 generic.go:334] "Generic (PLEG): container finished" podID="4d425622-da05-4988-a059-013c06b4ecf1" containerID="ce2685881a857cd53a444b13c2f7aef4bd6f5c6b26f0a8cbc8a0c60a7f826c60" exitCode=0 Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.328703 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-create-6lzp5" event={"ID":"4d425622-da05-4988-a059-013c06b4ecf1","Type":"ContainerDied","Data":"ce2685881a857cd53a444b13c2f7aef4bd6f5c6b26f0a8cbc8a0c60a7f826c60"} Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.330677 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0515-account-create-update-rln5d" event={"ID":"19155d05-01da-4e21-96c2-f23662f8f785","Type":"ContainerStarted","Data":"6dcb2a606401562e049d19a34d68af34e28fc99c34413a4f7cfddf60bc5211ee"} Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.334378 4870 generic.go:334] "Generic (PLEG): container finished" podID="59f46507-531f-4d06-86d9-6c07a50abc6d" containerID="c19fa8ba72448fbf848d632a4b2c87c38ba00d3573897003f02d36a9263593ff" exitCode=0 Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.334418 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kqrrr" event={"ID":"59f46507-531f-4d06-86d9-6c07a50abc6d","Type":"ContainerDied","Data":"c19fa8ba72448fbf848d632a4b2c87c38ba00d3573897003f02d36a9263593ff"} Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.357435 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-937e-account-create-update-6w49r" podStartSLOduration=2.357420316 podStartE2EDuration="2.357420316s" podCreationTimestamp="2026-01-30 08:27:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:27:38.342205436 +0000 UTC m=+1097.037752545" watchObservedRunningTime="2026-01-30 08:27:38.357420316 +0000 UTC m=+1097.052967425" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.395013 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-0515-account-create-update-rln5d" podStartSLOduration=2.3949898689999998 podStartE2EDuration="2.394989869s" podCreationTimestamp="2026-01-30 08:27:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:27:38.390809754 +0000 UTC m=+1097.086356863" watchObservedRunningTime="2026-01-30 08:27:38.394989869 +0000 UTC m=+1097.090536978" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.476221 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-8td6r"] Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.477266 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8td6r" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.486791 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8td6r"] Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.541535 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-gbfzh"] Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.542649 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-gbfzh" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.547506 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.547824 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-b9kpk" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.557355 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-gbfzh"] Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.603549 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfc35112-b552-434a-b702-26c53cbf5574-operator-scripts\") pod \"glance-db-create-8td6r\" (UID: \"dfc35112-b552-434a-b702-26c53cbf5574\") " pod="openstack/glance-db-create-8td6r" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.603597 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f4zh\" (UniqueName: \"kubernetes.io/projected/dfc35112-b552-434a-b702-26c53cbf5574-kube-api-access-8f4zh\") pod \"glance-db-create-8td6r\" (UID: \"dfc35112-b552-434a-b702-26c53cbf5574\") " pod="openstack/glance-db-create-8td6r" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.612053 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9d1f-account-create-update-mffzg"] Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.613379 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9d1f-account-create-update-mffzg" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.617465 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.638864 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9d1f-account-create-update-mffzg"] Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.689000 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-xrsjh"] Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.690102 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-xrsjh" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.695376 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xrsjh"] Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.742233 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b42t4\" (UniqueName: \"kubernetes.io/projected/051874aa-a01e-40bf-a987-a830886ea878-kube-api-access-b42t4\") pod \"neutron-db-create-xrsjh\" (UID: \"051874aa-a01e-40bf-a987-a830886ea878\") " pod="openstack/neutron-db-create-xrsjh" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.742386 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfc35112-b552-434a-b702-26c53cbf5574-operator-scripts\") pod \"glance-db-create-8td6r\" (UID: \"dfc35112-b552-434a-b702-26c53cbf5574\") " pod="openstack/glance-db-create-8td6r" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.742413 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8637667-8b7e-455e-8ba9-b6291574e4ce-combined-ca-bundle\") pod \"watcher-db-sync-gbfzh\" (UID: \"e8637667-8b7e-455e-8ba9-b6291574e4ce\") " pod="openstack/watcher-db-sync-gbfzh" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.742447 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/051874aa-a01e-40bf-a987-a830886ea878-operator-scripts\") pod \"neutron-db-create-xrsjh\" (UID: \"051874aa-a01e-40bf-a987-a830886ea878\") " pod="openstack/neutron-db-create-xrsjh" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.742501 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f4zh\" (UniqueName: \"kubernetes.io/projected/dfc35112-b552-434a-b702-26c53cbf5574-kube-api-access-8f4zh\") pod \"glance-db-create-8td6r\" (UID: \"dfc35112-b552-434a-b702-26c53cbf5574\") " pod="openstack/glance-db-create-8td6r" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.742544 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8637667-8b7e-455e-8ba9-b6291574e4ce-config-data\") pod \"watcher-db-sync-gbfzh\" (UID: \"e8637667-8b7e-455e-8ba9-b6291574e4ce\") " pod="openstack/watcher-db-sync-gbfzh" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.742594 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e8637667-8b7e-455e-8ba9-b6291574e4ce-db-sync-config-data\") pod \"watcher-db-sync-gbfzh\" (UID: \"e8637667-8b7e-455e-8ba9-b6291574e4ce\") " pod="openstack/watcher-db-sync-gbfzh" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.742645 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb61b735-bf9c-4bf5-a5cf-1948435af72e-operator-scripts\") pod \"glance-9d1f-account-create-update-mffzg\" (UID: \"eb61b735-bf9c-4bf5-a5cf-1948435af72e\") " pod="openstack/glance-9d1f-account-create-update-mffzg" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.742702 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfbn2\" (UniqueName: \"kubernetes.io/projected/e8637667-8b7e-455e-8ba9-b6291574e4ce-kube-api-access-xfbn2\") pod \"watcher-db-sync-gbfzh\" (UID: \"e8637667-8b7e-455e-8ba9-b6291574e4ce\") " pod="openstack/watcher-db-sync-gbfzh" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.742776 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5894m\" (UniqueName: \"kubernetes.io/projected/eb61b735-bf9c-4bf5-a5cf-1948435af72e-kube-api-access-5894m\") pod \"glance-9d1f-account-create-update-mffzg\" (UID: \"eb61b735-bf9c-4bf5-a5cf-1948435af72e\") " pod="openstack/glance-9d1f-account-create-update-mffzg" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.743817 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfc35112-b552-434a-b702-26c53cbf5574-operator-scripts\") pod \"glance-db-create-8td6r\" (UID: \"dfc35112-b552-434a-b702-26c53cbf5574\") " pod="openstack/glance-db-create-8td6r" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.778625 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6de9-account-create-update-nwcgl"] Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.780112 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6de9-account-create-update-nwcgl" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.796414 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.796458 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f4zh\" (UniqueName: \"kubernetes.io/projected/dfc35112-b552-434a-b702-26c53cbf5574-kube-api-access-8f4zh\") pod \"glance-db-create-8td6r\" (UID: \"dfc35112-b552-434a-b702-26c53cbf5574\") " pod="openstack/glance-db-create-8td6r" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.809237 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6de9-account-create-update-nwcgl"] Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.846372 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb61b735-bf9c-4bf5-a5cf-1948435af72e-operator-scripts\") pod \"glance-9d1f-account-create-update-mffzg\" (UID: \"eb61b735-bf9c-4bf5-a5cf-1948435af72e\") " pod="openstack/glance-9d1f-account-create-update-mffzg" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.846648 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfbn2\" (UniqueName: \"kubernetes.io/projected/e8637667-8b7e-455e-8ba9-b6291574e4ce-kube-api-access-xfbn2\") pod \"watcher-db-sync-gbfzh\" (UID: \"e8637667-8b7e-455e-8ba9-b6291574e4ce\") " pod="openstack/watcher-db-sync-gbfzh" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.846820 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5894m\" (UniqueName: \"kubernetes.io/projected/eb61b735-bf9c-4bf5-a5cf-1948435af72e-kube-api-access-5894m\") pod \"glance-9d1f-account-create-update-mffzg\" (UID: \"eb61b735-bf9c-4bf5-a5cf-1948435af72e\") " pod="openstack/glance-9d1f-account-create-update-mffzg" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.847896 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-b42t4\" (UniqueName: \"kubernetes.io/projected/051874aa-a01e-40bf-a987-a830886ea878-kube-api-access-b42t4\") pod \"neutron-db-create-xrsjh\" (UID: \"051874aa-a01e-40bf-a987-a830886ea878\") " pod="openstack/neutron-db-create-xrsjh" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.848051 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8637667-8b7e-455e-8ba9-b6291574e4ce-combined-ca-bundle\") pod \"watcher-db-sync-gbfzh\" (UID: \"e8637667-8b7e-455e-8ba9-b6291574e4ce\") " pod="openstack/watcher-db-sync-gbfzh" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.848151 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/051874aa-a01e-40bf-a987-a830886ea878-operator-scripts\") pod \"neutron-db-create-xrsjh\" (UID: \"051874aa-a01e-40bf-a987-a830886ea878\") " pod="openstack/neutron-db-create-xrsjh" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.848246 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8637667-8b7e-455e-8ba9-b6291574e4ce-config-data\") pod \"watcher-db-sync-gbfzh\" (UID: \"e8637667-8b7e-455e-8ba9-b6291574e4ce\") " pod="openstack/watcher-db-sync-gbfzh" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.848328 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e8637667-8b7e-455e-8ba9-b6291574e4ce-db-sync-config-data\") pod \"watcher-db-sync-gbfzh\" (UID: \"e8637667-8b7e-455e-8ba9-b6291574e4ce\") " pod="openstack/watcher-db-sync-gbfzh" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.848594 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb61b735-bf9c-4bf5-a5cf-1948435af72e-operator-scripts\") pod \"glance-9d1f-account-create-update-mffzg\" (UID: \"eb61b735-bf9c-4bf5-a5cf-1948435af72e\") " pod="openstack/glance-9d1f-account-create-update-mffzg" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.849595 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/051874aa-a01e-40bf-a987-a830886ea878-operator-scripts\") pod \"neutron-db-create-xrsjh\" (UID: \"051874aa-a01e-40bf-a987-a830886ea878\") " pod="openstack/neutron-db-create-xrsjh" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.857839 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8637667-8b7e-455e-8ba9-b6291574e4ce-config-data\") pod \"watcher-db-sync-gbfzh\" (UID: \"e8637667-8b7e-455e-8ba9-b6291574e4ce\") " pod="openstack/watcher-db-sync-gbfzh" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.859127 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e8637667-8b7e-455e-8ba9-b6291574e4ce-db-sync-config-data\") pod \"watcher-db-sync-gbfzh\" (UID: \"e8637667-8b7e-455e-8ba9-b6291574e4ce\") " pod="openstack/watcher-db-sync-gbfzh" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.863628 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e8637667-8b7e-455e-8ba9-b6291574e4ce-combined-ca-bundle\") pod \"watcher-db-sync-gbfzh\" (UID: \"e8637667-8b7e-455e-8ba9-b6291574e4ce\") " pod="openstack/watcher-db-sync-gbfzh" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.869324 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5894m\" (UniqueName: \"kubernetes.io/projected/eb61b735-bf9c-4bf5-a5cf-1948435af72e-kube-api-access-5894m\") pod \"glance-9d1f-account-create-update-mffzg\" (UID: \"eb61b735-bf9c-4bf5-a5cf-1948435af72e\") " pod="openstack/glance-9d1f-account-create-update-mffzg" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.872881 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b42t4\" (UniqueName: \"kubernetes.io/projected/051874aa-a01e-40bf-a987-a830886ea878-kube-api-access-b42t4\") pod \"neutron-db-create-xrsjh\" (UID: \"051874aa-a01e-40bf-a987-a830886ea878\") " pod="openstack/neutron-db-create-xrsjh" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.875103 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8td6r" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.877460 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfbn2\" (UniqueName: \"kubernetes.io/projected/e8637667-8b7e-455e-8ba9-b6291574e4ce-kube-api-access-xfbn2\") pod \"watcher-db-sync-gbfzh\" (UID: \"e8637667-8b7e-455e-8ba9-b6291574e4ce\") " pod="openstack/watcher-db-sync-gbfzh" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.886503 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-gbfzh" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.952792 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6566e49-850d-460e-9a22-9bfd7384f494-operator-scripts\") pod \"neutron-6de9-account-create-update-nwcgl\" (UID: \"b6566e49-850d-460e-9a22-9bfd7384f494\") " pod="openstack/neutron-6de9-account-create-update-nwcgl" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.953834 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmq87\" (UniqueName: \"kubernetes.io/projected/b6566e49-850d-460e-9a22-9bfd7384f494-kube-api-access-nmq87\") pod \"neutron-6de9-account-create-update-nwcgl\" (UID: \"b6566e49-850d-460e-9a22-9bfd7384f494\") " pod="openstack/neutron-6de9-account-create-update-nwcgl" Jan 30 08:27:38 crc kubenswrapper[4870]: I0130 08:27:38.956862 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9d1f-account-create-update-mffzg" Jan 30 08:27:39 crc kubenswrapper[4870]: I0130 08:27:39.055091 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6566e49-850d-460e-9a22-9bfd7384f494-operator-scripts\") pod \"neutron-6de9-account-create-update-nwcgl\" (UID: \"b6566e49-850d-460e-9a22-9bfd7384f494\") " pod="openstack/neutron-6de9-account-create-update-nwcgl" Jan 30 08:27:39 crc kubenswrapper[4870]: I0130 08:27:39.055210 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmq87\" (UniqueName: \"kubernetes.io/projected/b6566e49-850d-460e-9a22-9bfd7384f494-kube-api-access-nmq87\") pod \"neutron-6de9-account-create-update-nwcgl\" (UID: \"b6566e49-850d-460e-9a22-9bfd7384f494\") " pod="openstack/neutron-6de9-account-create-update-nwcgl" Jan 30 08:27:39 crc kubenswrapper[4870]: I0130 08:27:39.055820 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6566e49-850d-460e-9a22-9bfd7384f494-operator-scripts\") pod \"neutron-6de9-account-create-update-nwcgl\" (UID: \"b6566e49-850d-460e-9a22-9bfd7384f494\") " pod="openstack/neutron-6de9-account-create-update-nwcgl" Jan 30 08:27:39 crc kubenswrapper[4870]: I0130 08:27:39.058161 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xrsjh" Jan 30 08:27:39 crc kubenswrapper[4870]: I0130 08:27:39.082686 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmq87\" (UniqueName: \"kubernetes.io/projected/b6566e49-850d-460e-9a22-9bfd7384f494-kube-api-access-nmq87\") pod \"neutron-6de9-account-create-update-nwcgl\" (UID: \"b6566e49-850d-460e-9a22-9bfd7384f494\") " pod="openstack/neutron-6de9-account-create-update-nwcgl" Jan 30 08:27:39 crc kubenswrapper[4870]: I0130 08:27:39.130585 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6de9-account-create-update-nwcgl" Jan 30 08:27:39 crc kubenswrapper[4870]: I0130 08:27:39.213515 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8td6r"] Jan 30 08:27:39 crc kubenswrapper[4870]: I0130 08:27:39.350479 4870 generic.go:334] "Generic (PLEG): container finished" podID="17e1f740-4393-4ba2-8242-fb863196cb02" containerID="ebb6defef32112bcd4f761a254fe06dd72ca1e2b11d0f09023e3983d12f747be" exitCode=0 Jan 30 08:27:39 crc kubenswrapper[4870]: I0130 08:27:39.350519 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-937e-account-create-update-6w49r" event={"ID":"17e1f740-4393-4ba2-8242-fb863196cb02","Type":"ContainerDied","Data":"ebb6defef32112bcd4f761a254fe06dd72ca1e2b11d0f09023e3983d12f747be"} Jan 30 08:27:39 crc kubenswrapper[4870]: I0130 08:27:39.352568 4870 generic.go:334] "Generic (PLEG): container finished" podID="19155d05-01da-4e21-96c2-f23662f8f785" containerID="6dcb2a606401562e049d19a34d68af34e28fc99c34413a4f7cfddf60bc5211ee" exitCode=0 Jan 30 08:27:39 crc kubenswrapper[4870]: I0130 08:27:39.352633 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0515-account-create-update-rln5d" event={"ID":"19155d05-01da-4e21-96c2-f23662f8f785","Type":"ContainerDied","Data":"6dcb2a606401562e049d19a34d68af34e28fc99c34413a4f7cfddf60bc5211ee"} Jan 30 08:27:39 crc kubenswrapper[4870]: I0130 08:27:39.357190 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8td6r" event={"ID":"dfc35112-b552-434a-b702-26c53cbf5574","Type":"ContainerStarted","Data":"d42477892b3ddaabcfbb181315d9bfde068e17d9e265081ee29d48545444f115"} Jan 30 08:27:39 crc kubenswrapper[4870]: W0130 08:27:39.480991 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8637667_8b7e_455e_8ba9_b6291574e4ce.slice/crio-25fc46aae9550fd891b36bb65004fc5f285237547119046fa6127a16efed7dfd WatchSource:0}: Error finding container 25fc46aae9550fd891b36bb65004fc5f285237547119046fa6127a16efed7dfd: Status 404 returned error can't find the container with id 25fc46aae9550fd891b36bb65004fc5f285237547119046fa6127a16efed7dfd Jan 30 08:27:39 crc kubenswrapper[4870]: I0130 08:27:39.487731 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-gbfzh"] Jan 30 08:27:39 crc kubenswrapper[4870]: I0130 08:27:39.526622 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6de9-account-create-update-nwcgl"] Jan 30 08:27:39 crc kubenswrapper[4870]: I0130 08:27:39.537557 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9d1f-account-create-update-mffzg"] Jan 30 08:27:39 crc kubenswrapper[4870]: I0130 08:27:39.622269 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xrsjh"] Jan 30 08:27:39 crc kubenswrapper[4870]: I0130 08:27:39.983225 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6lzp5" Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.010928 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-kqrrr" Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.089107 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59f46507-531f-4d06-86d9-6c07a50abc6d-operator-scripts\") pod \"59f46507-531f-4d06-86d9-6c07a50abc6d\" (UID: \"59f46507-531f-4d06-86d9-6c07a50abc6d\") " Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.089204 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkpk7\" (UniqueName: \"kubernetes.io/projected/59f46507-531f-4d06-86d9-6c07a50abc6d-kube-api-access-tkpk7\") pod \"59f46507-531f-4d06-86d9-6c07a50abc6d\" (UID: \"59f46507-531f-4d06-86d9-6c07a50abc6d\") " Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.089292 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d425622-da05-4988-a059-013c06b4ecf1-operator-scripts\") pod \"4d425622-da05-4988-a059-013c06b4ecf1\" (UID: \"4d425622-da05-4988-a059-013c06b4ecf1\") " Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.089368 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlw25\" (UniqueName: \"kubernetes.io/projected/4d425622-da05-4988-a059-013c06b4ecf1-kube-api-access-vlw25\") pod \"4d425622-da05-4988-a059-013c06b4ecf1\" (UID: \"4d425622-da05-4988-a059-013c06b4ecf1\") " Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.090006 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59f46507-531f-4d06-86d9-6c07a50abc6d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "59f46507-531f-4d06-86d9-6c07a50abc6d" (UID: "59f46507-531f-4d06-86d9-6c07a50abc6d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.090709 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d425622-da05-4988-a059-013c06b4ecf1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d425622-da05-4988-a059-013c06b4ecf1" (UID: "4d425622-da05-4988-a059-013c06b4ecf1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.094705 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59f46507-531f-4d06-86d9-6c07a50abc6d-kube-api-access-tkpk7" (OuterVolumeSpecName: "kube-api-access-tkpk7") pod "59f46507-531f-4d06-86d9-6c07a50abc6d" (UID: "59f46507-531f-4d06-86d9-6c07a50abc6d"). InnerVolumeSpecName "kube-api-access-tkpk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.096509 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d425622-da05-4988-a059-013c06b4ecf1-kube-api-access-vlw25" (OuterVolumeSpecName: "kube-api-access-vlw25") pod "4d425622-da05-4988-a059-013c06b4ecf1" (UID: "4d425622-da05-4988-a059-013c06b4ecf1"). InnerVolumeSpecName "kube-api-access-vlw25". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.191258 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkpk7\" (UniqueName: \"kubernetes.io/projected/59f46507-531f-4d06-86d9-6c07a50abc6d-kube-api-access-tkpk7\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.191292 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d425622-da05-4988-a059-013c06b4ecf1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.191306 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlw25\" (UniqueName: \"kubernetes.io/projected/4d425622-da05-4988-a059-013c06b4ecf1-kube-api-access-vlw25\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.191315 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59f46507-531f-4d06-86d9-6c07a50abc6d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.375023 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xrsjh" event={"ID":"051874aa-a01e-40bf-a987-a830886ea878","Type":"ContainerStarted","Data":"cb21d97629ca986697c0491abe322efbe6175a053c82fd76648eaa5b827fb2cc"} Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.375074 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xrsjh" event={"ID":"051874aa-a01e-40bf-a987-a830886ea878","Type":"ContainerStarted","Data":"6af75c88e6c19044721841617ae2fbc93efc4be1bd4b334456132a0a6bec8e0f"} Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.381179 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-gbfzh" event={"ID":"e8637667-8b7e-455e-8ba9-b6291574e4ce","Type":"ContainerStarted","Data":"25fc46aae9550fd891b36bb65004fc5f285237547119046fa6127a16efed7dfd"} Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.396503 4870 generic.go:334] "Generic (PLEG): container finished" podID="dfc35112-b552-434a-b702-26c53cbf5574" containerID="b1933043ebcbf2051360c783e7b0fa2a563a6c4cee962802cf9d526f5fcd348c" exitCode=0 Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.396877 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8td6r" event={"ID":"dfc35112-b552-434a-b702-26c53cbf5574","Type":"ContainerDied","Data":"b1933043ebcbf2051360c783e7b0fa2a563a6c4cee962802cf9d526f5fcd348c"} Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.401658 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6lzp5" event={"ID":"4d425622-da05-4988-a059-013c06b4ecf1","Type":"ContainerDied","Data":"443c9665d6a75766dd0f1a996e3ba08bc212823b23adcfb2048b16772860a938"} Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.401685 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="443c9665d6a75766dd0f1a996e3ba08bc212823b23adcfb2048b16772860a938" Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.401726 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-6lzp5" Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.403319 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9d1f-account-create-update-mffzg" event={"ID":"eb61b735-bf9c-4bf5-a5cf-1948435af72e","Type":"ContainerStarted","Data":"a86a16c99cbecfe80af026afdf8bc6eec15eafc6658e69be7a63babe7a18aa00"} Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.403344 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9d1f-account-create-update-mffzg" event={"ID":"eb61b735-bf9c-4bf5-a5cf-1948435af72e","Type":"ContainerStarted","Data":"784b44f85ae007ee5455ec0b1b0eefd1ddd8dc08aafa33e4bdb7deee1e8d44f2"} Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.405937 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kqrrr" Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.405953 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kqrrr" event={"ID":"59f46507-531f-4d06-86d9-6c07a50abc6d","Type":"ContainerDied","Data":"f141a3af47237c42daeb2dac21dd12e35e4144a583e7317c4500c4b27418e250"} Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.406008 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f141a3af47237c42daeb2dac21dd12e35e4144a583e7317c4500c4b27418e250" Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.410776 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-xrsjh" podStartSLOduration=2.41075229 podStartE2EDuration="2.41075229s" podCreationTimestamp="2026-01-30 08:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:27:40.393232851 +0000 UTC m=+1099.088779960" watchObservedRunningTime="2026-01-30 08:27:40.41075229 +0000 UTC m=+1099.106299399" Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.414105 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6de9-account-create-update-nwcgl" event={"ID":"b6566e49-850d-460e-9a22-9bfd7384f494","Type":"ContainerStarted","Data":"96c828944b59ded4cdb603b725476894266a7134b9d788fbea6f5b49b309942a"} Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.415409 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6de9-account-create-update-nwcgl" event={"ID":"b6566e49-850d-460e-9a22-9bfd7384f494","Type":"ContainerStarted","Data":"ffc742f6a81620434ff82836c4c9e4cd30220b47560730abe3ffaf861348054f"} Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.432735 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-9d1f-account-create-update-mffzg" podStartSLOduration=2.43271721 podStartE2EDuration="2.43271721s" podCreationTimestamp="2026-01-30 08:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:27:40.420191759 +0000 UTC m=+1099.115738868" watchObservedRunningTime="2026-01-30 08:27:40.43271721 +0000 UTC m=+1099.128264319" Jan 30 08:27:40 crc kubenswrapper[4870]: I0130 08:27:40.451352 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6de9-account-create-update-nwcgl" podStartSLOduration=2.45131723 podStartE2EDuration="2.45131723s" podCreationTimestamp="2026-01-30 08:27:38 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:27:40.440248793 +0000 UTC m=+1099.135795902" watchObservedRunningTime="2026-01-30 08:27:40.45131723 +0000 UTC m=+1099.146864339" Jan 30 08:27:41 crc kubenswrapper[4870]: I0130 08:27:41.431169 4870 generic.go:334] "Generic (PLEG): container finished" podID="eb61b735-bf9c-4bf5-a5cf-1948435af72e" containerID="a86a16c99cbecfe80af026afdf8bc6eec15eafc6658e69be7a63babe7a18aa00" exitCode=0 Jan 30 08:27:41 crc kubenswrapper[4870]: I0130 08:27:41.431272 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9d1f-account-create-update-mffzg" event={"ID":"eb61b735-bf9c-4bf5-a5cf-1948435af72e","Type":"ContainerDied","Data":"a86a16c99cbecfe80af026afdf8bc6eec15eafc6658e69be7a63babe7a18aa00"} Jan 30 08:27:41 crc kubenswrapper[4870]: I0130 08:27:41.435937 4870 generic.go:334] "Generic (PLEG): container finished" podID="b6566e49-850d-460e-9a22-9bfd7384f494" containerID="96c828944b59ded4cdb603b725476894266a7134b9d788fbea6f5b49b309942a" exitCode=0 Jan 30 08:27:41 crc kubenswrapper[4870]: I0130 08:27:41.435995 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6de9-account-create-update-nwcgl" event={"ID":"b6566e49-850d-460e-9a22-9bfd7384f494","Type":"ContainerDied","Data":"96c828944b59ded4cdb603b725476894266a7134b9d788fbea6f5b49b309942a"} Jan 30 08:27:41 crc kubenswrapper[4870]: I0130 08:27:41.437819 4870 generic.go:334] "Generic (PLEG): container finished" podID="051874aa-a01e-40bf-a987-a830886ea878" containerID="cb21d97629ca986697c0491abe322efbe6175a053c82fd76648eaa5b827fb2cc" exitCode=0 Jan 30 08:27:41 crc kubenswrapper[4870]: I0130 08:27:41.437898 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xrsjh" event={"ID":"051874aa-a01e-40bf-a987-a830886ea878","Type":"ContainerDied","Data":"cb21d97629ca986697c0491abe322efbe6175a053c82fd76648eaa5b827fb2cc"} Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.147977 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8td6r" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.184240 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9d1f-account-create-update-mffzg" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.191411 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xrsjh" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.226719 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0515-account-create-update-rln5d" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.234012 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-937e-account-create-update-6w49r" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.241795 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6de9-account-create-update-nwcgl" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.320238 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19155d05-01da-4e21-96c2-f23662f8f785-operator-scripts\") pod \"19155d05-01da-4e21-96c2-f23662f8f785\" (UID: \"19155d05-01da-4e21-96c2-f23662f8f785\") " Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.320323 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5894m\" (UniqueName: \"kubernetes.io/projected/eb61b735-bf9c-4bf5-a5cf-1948435af72e-kube-api-access-5894m\") pod \"eb61b735-bf9c-4bf5-a5cf-1948435af72e\" (UID: \"eb61b735-bf9c-4bf5-a5cf-1948435af72e\") " Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.320371 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d78lm\" (UniqueName: \"kubernetes.io/projected/19155d05-01da-4e21-96c2-f23662f8f785-kube-api-access-d78lm\") pod \"19155d05-01da-4e21-96c2-f23662f8f785\" (UID: \"19155d05-01da-4e21-96c2-f23662f8f785\") " Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.320391 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htvx9\" (UniqueName: \"kubernetes.io/projected/17e1f740-4393-4ba2-8242-fb863196cb02-kube-api-access-htvx9\") pod \"17e1f740-4393-4ba2-8242-fb863196cb02\" (UID: \"17e1f740-4393-4ba2-8242-fb863196cb02\") " Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.320411 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfc35112-b552-434a-b702-26c53cbf5574-operator-scripts\") pod \"dfc35112-b552-434a-b702-26c53cbf5574\" (UID: \"dfc35112-b552-434a-b702-26c53cbf5574\") " Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.320429 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/051874aa-a01e-40bf-a987-a830886ea878-operator-scripts\") pod \"051874aa-a01e-40bf-a987-a830886ea878\" (UID: \"051874aa-a01e-40bf-a987-a830886ea878\") " Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.320464 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb61b735-bf9c-4bf5-a5cf-1948435af72e-operator-scripts\") pod \"eb61b735-bf9c-4bf5-a5cf-1948435af72e\" (UID: \"eb61b735-bf9c-4bf5-a5cf-1948435af72e\") " Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.320496 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17e1f740-4393-4ba2-8242-fb863196cb02-operator-scripts\") pod \"17e1f740-4393-4ba2-8242-fb863196cb02\" (UID: \"17e1f740-4393-4ba2-8242-fb863196cb02\") " Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.320557 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b42t4\" (UniqueName: \"kubernetes.io/projected/051874aa-a01e-40bf-a987-a830886ea878-kube-api-access-b42t4\") pod \"051874aa-a01e-40bf-a987-a830886ea878\" (UID: \"051874aa-a01e-40bf-a987-a830886ea878\") " Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.320592 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f4zh\" (UniqueName: 
\"kubernetes.io/projected/dfc35112-b552-434a-b702-26c53cbf5574-kube-api-access-8f4zh\") pod \"dfc35112-b552-434a-b702-26c53cbf5574\" (UID: \"dfc35112-b552-434a-b702-26c53cbf5574\") " Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.321942 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfc35112-b552-434a-b702-26c53cbf5574-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dfc35112-b552-434a-b702-26c53cbf5574" (UID: "dfc35112-b552-434a-b702-26c53cbf5574"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.321939 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19155d05-01da-4e21-96c2-f23662f8f785-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19155d05-01da-4e21-96c2-f23662f8f785" (UID: "19155d05-01da-4e21-96c2-f23662f8f785"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.321950 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/051874aa-a01e-40bf-a987-a830886ea878-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "051874aa-a01e-40bf-a987-a830886ea878" (UID: "051874aa-a01e-40bf-a987-a830886ea878"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.322504 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb61b735-bf9c-4bf5-a5cf-1948435af72e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb61b735-bf9c-4bf5-a5cf-1948435af72e" (UID: "eb61b735-bf9c-4bf5-a5cf-1948435af72e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.323155 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17e1f740-4393-4ba2-8242-fb863196cb02-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "17e1f740-4393-4ba2-8242-fb863196cb02" (UID: "17e1f740-4393-4ba2-8242-fb863196cb02"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.327757 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/051874aa-a01e-40bf-a987-a830886ea878-kube-api-access-b42t4" (OuterVolumeSpecName: "kube-api-access-b42t4") pod "051874aa-a01e-40bf-a987-a830886ea878" (UID: "051874aa-a01e-40bf-a987-a830886ea878"). InnerVolumeSpecName "kube-api-access-b42t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.329248 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e1f740-4393-4ba2-8242-fb863196cb02-kube-api-access-htvx9" (OuterVolumeSpecName: "kube-api-access-htvx9") pod "17e1f740-4393-4ba2-8242-fb863196cb02" (UID: "17e1f740-4393-4ba2-8242-fb863196cb02"). InnerVolumeSpecName "kube-api-access-htvx9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.329397 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb61b735-bf9c-4bf5-a5cf-1948435af72e-kube-api-access-5894m" (OuterVolumeSpecName: "kube-api-access-5894m") pod "eb61b735-bf9c-4bf5-a5cf-1948435af72e" (UID: "eb61b735-bf9c-4bf5-a5cf-1948435af72e"). InnerVolumeSpecName "kube-api-access-5894m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.329423 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfc35112-b552-434a-b702-26c53cbf5574-kube-api-access-8f4zh" (OuterVolumeSpecName: "kube-api-access-8f4zh") pod "dfc35112-b552-434a-b702-26c53cbf5574" (UID: "dfc35112-b552-434a-b702-26c53cbf5574"). InnerVolumeSpecName "kube-api-access-8f4zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.329790 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19155d05-01da-4e21-96c2-f23662f8f785-kube-api-access-d78lm" (OuterVolumeSpecName: "kube-api-access-d78lm") pod "19155d05-01da-4e21-96c2-f23662f8f785" (UID: "19155d05-01da-4e21-96c2-f23662f8f785"). InnerVolumeSpecName "kube-api-access-d78lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.421685 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmq87\" (UniqueName: \"kubernetes.io/projected/b6566e49-850d-460e-9a22-9bfd7384f494-kube-api-access-nmq87\") pod \"b6566e49-850d-460e-9a22-9bfd7384f494\" (UID: \"b6566e49-850d-460e-9a22-9bfd7384f494\") " Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.421951 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6566e49-850d-460e-9a22-9bfd7384f494-operator-scripts\") pod \"b6566e49-850d-460e-9a22-9bfd7384f494\" (UID: \"b6566e49-850d-460e-9a22-9bfd7384f494\") " Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.422499 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6566e49-850d-460e-9a22-9bfd7384f494-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b6566e49-850d-460e-9a22-9bfd7384f494" (UID: "b6566e49-850d-460e-9a22-9bfd7384f494"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.422775 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b42t4\" (UniqueName: \"kubernetes.io/projected/051874aa-a01e-40bf-a987-a830886ea878-kube-api-access-b42t4\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.422795 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f4zh\" (UniqueName: \"kubernetes.io/projected/dfc35112-b552-434a-b702-26c53cbf5574-kube-api-access-8f4zh\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.422807 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19155d05-01da-4e21-96c2-f23662f8f785-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.422816 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5894m\" (UniqueName: \"kubernetes.io/projected/eb61b735-bf9c-4bf5-a5cf-1948435af72e-kube-api-access-5894m\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.422825 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d78lm\" (UniqueName: \"kubernetes.io/projected/19155d05-01da-4e21-96c2-f23662f8f785-kube-api-access-d78lm\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.422834 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htvx9\" (UniqueName: \"kubernetes.io/projected/17e1f740-4393-4ba2-8242-fb863196cb02-kube-api-access-htvx9\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.422842 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfc35112-b552-434a-b702-26c53cbf5574-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.422851 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6566e49-850d-460e-9a22-9bfd7384f494-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.422859 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/051874aa-a01e-40bf-a987-a830886ea878-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.422868 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb61b735-bf9c-4bf5-a5cf-1948435af72e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.422893 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17e1f740-4393-4ba2-8242-fb863196cb02-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.426782 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6566e49-850d-460e-9a22-9bfd7384f494-kube-api-access-nmq87" (OuterVolumeSpecName: "kube-api-access-nmq87") pod "b6566e49-850d-460e-9a22-9bfd7384f494" (UID: "b6566e49-850d-460e-9a22-9bfd7384f494"). InnerVolumeSpecName "kube-api-access-nmq87". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.498375 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9d1f-account-create-update-mffzg" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.498383 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9d1f-account-create-update-mffzg" event={"ID":"eb61b735-bf9c-4bf5-a5cf-1948435af72e","Type":"ContainerDied","Data":"784b44f85ae007ee5455ec0b1b0eefd1ddd8dc08aafa33e4bdb7deee1e8d44f2"} Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.500506 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="784b44f85ae007ee5455ec0b1b0eefd1ddd8dc08aafa33e4bdb7deee1e8d44f2" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.501353 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0515-account-create-update-rln5d" event={"ID":"19155d05-01da-4e21-96c2-f23662f8f785","Type":"ContainerDied","Data":"3133fff318af851e54e7933fe20396b64a992bab1d22e8aca788bbc77160af37"} Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.501396 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3133fff318af851e54e7933fe20396b64a992bab1d22e8aca788bbc77160af37" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.501411 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0515-account-create-update-rln5d" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.504241 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f6r68" event={"ID":"881527d5-776b-4639-9306-895d1e370abd","Type":"ContainerStarted","Data":"c7c08f4bc1bd775e569c12ce6f45113dd74be7d4b1436663db01b3cc4e31c119"} Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.508330 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6de9-account-create-update-nwcgl" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.508412 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6de9-account-create-update-nwcgl" event={"ID":"b6566e49-850d-460e-9a22-9bfd7384f494","Type":"ContainerDied","Data":"ffc742f6a81620434ff82836c4c9e4cd30220b47560730abe3ffaf861348054f"} Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.508484 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffc742f6a81620434ff82836c4c9e4cd30220b47560730abe3ffaf861348054f" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.514942 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xrsjh" event={"ID":"051874aa-a01e-40bf-a987-a830886ea878","Type":"ContainerDied","Data":"6af75c88e6c19044721841617ae2fbc93efc4be1bd4b334456132a0a6bec8e0f"} Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.515015 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6af75c88e6c19044721841617ae2fbc93efc4be1bd4b334456132a0a6bec8e0f" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.515148 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-xrsjh" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.523795 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-gbfzh" event={"ID":"e8637667-8b7e-455e-8ba9-b6291574e4ce","Type":"ContainerStarted","Data":"0e37f5a9ce757405b6d3e4a3c9aee5ca81c0dd18541f09603a2ca8623a81a084"} Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.527229 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmq87\" (UniqueName: \"kubernetes.io/projected/b6566e49-850d-460e-9a22-9bfd7384f494-kube-api-access-nmq87\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.530803 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-f6r68" podStartSLOduration=1.816888226 podStartE2EDuration="11.530782351s" podCreationTimestamp="2026-01-30 08:27:36 +0000 UTC" firstStartedPulling="2026-01-30 08:27:37.189218014 +0000 UTC m=+1095.884765123" lastFinishedPulling="2026-01-30 08:27:46.903112129 +0000 UTC m=+1105.598659248" observedRunningTime="2026-01-30 08:27:47.522342868 +0000 UTC m=+1106.217890017" watchObservedRunningTime="2026-01-30 08:27:47.530782351 +0000 UTC m=+1106.226329470" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.532005 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8td6r" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.532126 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8td6r" event={"ID":"dfc35112-b552-434a-b702-26c53cbf5574","Type":"ContainerDied","Data":"d42477892b3ddaabcfbb181315d9bfde068e17d9e265081ee29d48545444f115"} Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.532180 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d42477892b3ddaabcfbb181315d9bfde068e17d9e265081ee29d48545444f115" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.536130 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-937e-account-create-update-6w49r" event={"ID":"17e1f740-4393-4ba2-8242-fb863196cb02","Type":"ContainerDied","Data":"b9684209c89c1359375b31da44bbfc4622187e78936d27a08d572996df752ae7"} Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.536167 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9684209c89c1359375b31da44bbfc4622187e78936d27a08d572996df752ae7" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.536228 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-937e-account-create-update-6w49r" Jan 30 08:27:47 crc kubenswrapper[4870]: I0130 08:27:47.556576 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-gbfzh" podStartSLOduration=2.109261327 podStartE2EDuration="9.556556132s" podCreationTimestamp="2026-01-30 08:27:38 +0000 UTC" firstStartedPulling="2026-01-30 08:27:39.487677774 +0000 UTC m=+1098.183224883" lastFinishedPulling="2026-01-30 08:27:46.934872366 +0000 UTC m=+1105.630519688" observedRunningTime="2026-01-30 08:27:47.548591115 +0000 UTC m=+1106.244138244" watchObservedRunningTime="2026-01-30 08:27:47.556556132 +0000 UTC m=+1106.252103241" Jan 30 08:27:48 crc kubenswrapper[4870]: I0130 08:27:48.896121 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-tssp8"] Jan 30 08:27:48 crc kubenswrapper[4870]: E0130 08:27:48.896792 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e1f740-4393-4ba2-8242-fb863196cb02" containerName="mariadb-account-create-update" Jan 30 08:27:48 crc kubenswrapper[4870]: I0130 08:27:48.896808 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e1f740-4393-4ba2-8242-fb863196cb02" containerName="mariadb-account-create-update" Jan 30 08:27:48 crc kubenswrapper[4870]: E0130 08:27:48.896826 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f46507-531f-4d06-86d9-6c07a50abc6d" containerName="mariadb-database-create" Jan 30 08:27:48 crc kubenswrapper[4870]: I0130 08:27:48.896834 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f46507-531f-4d06-86d9-6c07a50abc6d" containerName="mariadb-database-create" Jan 30 08:27:48 crc kubenswrapper[4870]: E0130 08:27:48.896845 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfc35112-b552-434a-b702-26c53cbf5574" containerName="mariadb-database-create" Jan 30 08:27:48 crc kubenswrapper[4870]: I0130 08:27:48.896856 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfc35112-b552-434a-b702-26c53cbf5574" containerName="mariadb-database-create" Jan 30 08:27:48 crc kubenswrapper[4870]: E0130 08:27:48.896868 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb61b735-bf9c-4bf5-a5cf-1948435af72e" containerName="mariadb-account-create-update" Jan 30 08:27:48 crc kubenswrapper[4870]: I0130 08:27:48.896894 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb61b735-bf9c-4bf5-a5cf-1948435af72e" containerName="mariadb-account-create-update" Jan 30 08:27:48 crc kubenswrapper[4870]: E0130 08:27:48.896908 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6566e49-850d-460e-9a22-9bfd7384f494" containerName="mariadb-account-create-update" Jan 30 08:27:48 crc kubenswrapper[4870]: I0130 08:27:48.896916 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6566e49-850d-460e-9a22-9bfd7384f494" containerName="mariadb-account-create-update" Jan 30 08:27:48 crc kubenswrapper[4870]: E0130 08:27:48.896932 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19155d05-01da-4e21-96c2-f23662f8f785" containerName="mariadb-account-create-update" Jan 30 08:27:48 crc kubenswrapper[4870]: I0130 08:27:48.896940 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="19155d05-01da-4e21-96c2-f23662f8f785" containerName="mariadb-account-create-update" Jan 30 08:27:48 crc kubenswrapper[4870]: E0130 08:27:48.896968 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="051874aa-a01e-40bf-a987-a830886ea878" 
containerName="mariadb-database-create" Jan 30 08:27:48 crc kubenswrapper[4870]: I0130 08:27:48.896975 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="051874aa-a01e-40bf-a987-a830886ea878" containerName="mariadb-database-create" Jan 30 08:27:48 crc kubenswrapper[4870]: E0130 08:27:48.896989 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d425622-da05-4988-a059-013c06b4ecf1" containerName="mariadb-database-create" Jan 30 08:27:48 crc kubenswrapper[4870]: I0130 08:27:48.896997 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d425622-da05-4988-a059-013c06b4ecf1" containerName="mariadb-database-create" Jan 30 08:27:48 crc kubenswrapper[4870]: I0130 08:27:48.897209 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e1f740-4393-4ba2-8242-fb863196cb02" containerName="mariadb-account-create-update" Jan 30 08:27:48 crc kubenswrapper[4870]: I0130 08:27:48.897242 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb61b735-bf9c-4bf5-a5cf-1948435af72e" containerName="mariadb-account-create-update" Jan 30 08:27:48 crc kubenswrapper[4870]: I0130 08:27:48.897267 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="051874aa-a01e-40bf-a987-a830886ea878" containerName="mariadb-database-create" Jan 30 08:27:48 crc kubenswrapper[4870]: I0130 08:27:48.897288 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="59f46507-531f-4d06-86d9-6c07a50abc6d" containerName="mariadb-database-create" Jan 30 08:27:48 crc kubenswrapper[4870]: I0130 08:27:48.897297 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="19155d05-01da-4e21-96c2-f23662f8f785" containerName="mariadb-account-create-update" Jan 30 08:27:48 crc kubenswrapper[4870]: I0130 08:27:48.897316 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfc35112-b552-434a-b702-26c53cbf5574" containerName="mariadb-database-create" Jan 30 08:27:48 crc kubenswrapper[4870]: I0130 08:27:48.897340 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d425622-da05-4988-a059-013c06b4ecf1" containerName="mariadb-database-create" Jan 30 08:27:48 crc kubenswrapper[4870]: I0130 08:27:48.897353 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6566e49-850d-460e-9a22-9bfd7384f494" containerName="mariadb-account-create-update" Jan 30 08:27:48 crc kubenswrapper[4870]: I0130 08:27:48.898031 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tssp8" Jan 30 08:27:48 crc kubenswrapper[4870]: I0130 08:27:48.901272 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 30 08:27:48 crc kubenswrapper[4870]: I0130 08:27:48.901431 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-58ht6" Jan 30 08:27:48 crc kubenswrapper[4870]: I0130 08:27:48.906065 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tssp8"] Jan 30 08:27:49 crc kubenswrapper[4870]: I0130 08:27:49.054160 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd09a42-14b6-4161-ba2a-82c4cf4f5983-config-data\") pod \"glance-db-sync-tssp8\" (UID: \"edd09a42-14b6-4161-ba2a-82c4cf4f5983\") " pod="openstack/glance-db-sync-tssp8" Jan 30 08:27:49 crc kubenswrapper[4870]: I0130 08:27:49.054254 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n8n8\" (UniqueName: \"kubernetes.io/projected/edd09a42-14b6-4161-ba2a-82c4cf4f5983-kube-api-access-7n8n8\") pod \"glance-db-sync-tssp8\" (UID: \"edd09a42-14b6-4161-ba2a-82c4cf4f5983\") " pod="openstack/glance-db-sync-tssp8" Jan 30 08:27:49 crc kubenswrapper[4870]: I0130 08:27:49.054423 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd09a42-14b6-4161-ba2a-82c4cf4f5983-combined-ca-bundle\") pod \"glance-db-sync-tssp8\" (UID: \"edd09a42-14b6-4161-ba2a-82c4cf4f5983\") " pod="openstack/glance-db-sync-tssp8" Jan 30 08:27:49 crc kubenswrapper[4870]: I0130 08:27:49.054469 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/edd09a42-14b6-4161-ba2a-82c4cf4f5983-db-sync-config-data\") pod \"glance-db-sync-tssp8\" (UID: \"edd09a42-14b6-4161-ba2a-82c4cf4f5983\") " pod="openstack/glance-db-sync-tssp8" Jan 30 08:27:49 crc kubenswrapper[4870]: I0130 08:27:49.155604 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n8n8\" (UniqueName: \"kubernetes.io/projected/edd09a42-14b6-4161-ba2a-82c4cf4f5983-kube-api-access-7n8n8\") pod \"glance-db-sync-tssp8\" (UID: \"edd09a42-14b6-4161-ba2a-82c4cf4f5983\") " pod="openstack/glance-db-sync-tssp8" Jan 30 08:27:49 crc kubenswrapper[4870]: I0130 08:27:49.155784 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd09a42-14b6-4161-ba2a-82c4cf4f5983-combined-ca-bundle\") pod \"glance-db-sync-tssp8\" (UID: \"edd09a42-14b6-4161-ba2a-82c4cf4f5983\") " pod="openstack/glance-db-sync-tssp8" Jan 30 08:27:49 crc kubenswrapper[4870]: I0130 08:27:49.155852 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/edd09a42-14b6-4161-ba2a-82c4cf4f5983-db-sync-config-data\") pod \"glance-db-sync-tssp8\" (UID: \"edd09a42-14b6-4161-ba2a-82c4cf4f5983\") " pod="openstack/glance-db-sync-tssp8" Jan 30 08:27:49 crc kubenswrapper[4870]: I0130 08:27:49.155922 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd09a42-14b6-4161-ba2a-82c4cf4f5983-config-data\") pod 
\"glance-db-sync-tssp8\" (UID: \"edd09a42-14b6-4161-ba2a-82c4cf4f5983\") " pod="openstack/glance-db-sync-tssp8" Jan 30 08:27:49 crc kubenswrapper[4870]: I0130 08:27:49.160027 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd09a42-14b6-4161-ba2a-82c4cf4f5983-config-data\") pod \"glance-db-sync-tssp8\" (UID: \"edd09a42-14b6-4161-ba2a-82c4cf4f5983\") " pod="openstack/glance-db-sync-tssp8" Jan 30 08:27:49 crc kubenswrapper[4870]: I0130 08:27:49.161329 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd09a42-14b6-4161-ba2a-82c4cf4f5983-combined-ca-bundle\") pod \"glance-db-sync-tssp8\" (UID: \"edd09a42-14b6-4161-ba2a-82c4cf4f5983\") " pod="openstack/glance-db-sync-tssp8" Jan 30 08:27:49 crc kubenswrapper[4870]: I0130 08:27:49.161858 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/edd09a42-14b6-4161-ba2a-82c4cf4f5983-db-sync-config-data\") pod \"glance-db-sync-tssp8\" (UID: \"edd09a42-14b6-4161-ba2a-82c4cf4f5983\") " pod="openstack/glance-db-sync-tssp8" Jan 30 08:27:49 crc kubenswrapper[4870]: I0130 08:27:49.184420 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n8n8\" (UniqueName: \"kubernetes.io/projected/edd09a42-14b6-4161-ba2a-82c4cf4f5983-kube-api-access-7n8n8\") pod \"glance-db-sync-tssp8\" (UID: \"edd09a42-14b6-4161-ba2a-82c4cf4f5983\") " pod="openstack/glance-db-sync-tssp8" Jan 30 08:27:49 crc kubenswrapper[4870]: I0130 08:27:49.253778 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-tssp8" Jan 30 08:27:50 crc kubenswrapper[4870]: I0130 08:27:50.277717 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tssp8"] Jan 30 08:27:50 crc kubenswrapper[4870]: I0130 08:27:50.562914 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tssp8" event={"ID":"edd09a42-14b6-4161-ba2a-82c4cf4f5983","Type":"ContainerStarted","Data":"8bfad17f6d235c11635a3d5c597e4e8cad4341b4b72906e827a01ca540cffaac"} Jan 30 08:27:51 crc kubenswrapper[4870]: I0130 08:27:51.573104 4870 generic.go:334] "Generic (PLEG): container finished" podID="e8637667-8b7e-455e-8ba9-b6291574e4ce" containerID="0e37f5a9ce757405b6d3e4a3c9aee5ca81c0dd18541f09603a2ca8623a81a084" exitCode=0 Jan 30 08:27:51 crc kubenswrapper[4870]: I0130 08:27:51.573206 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-gbfzh" event={"ID":"e8637667-8b7e-455e-8ba9-b6291574e4ce","Type":"ContainerDied","Data":"0e37f5a9ce757405b6d3e4a3c9aee5ca81c0dd18541f09603a2ca8623a81a084"} Jan 30 08:27:52 crc kubenswrapper[4870]: I0130 08:27:52.584477 4870 generic.go:334] "Generic (PLEG): container finished" podID="881527d5-776b-4639-9306-895d1e370abd" containerID="c7c08f4bc1bd775e569c12ce6f45113dd74be7d4b1436663db01b3cc4e31c119" exitCode=0 Jan 30 08:27:52 crc kubenswrapper[4870]: I0130 08:27:52.584730 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f6r68" event={"ID":"881527d5-776b-4639-9306-895d1e370abd","Type":"ContainerDied","Data":"c7c08f4bc1bd775e569c12ce6f45113dd74be7d4b1436663db01b3cc4e31c119"} Jan 30 08:27:52 crc kubenswrapper[4870]: I0130 08:27:52.960597 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-gbfzh" Jan 30 08:27:53 crc kubenswrapper[4870]: I0130 08:27:53.018353 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e8637667-8b7e-455e-8ba9-b6291574e4ce-db-sync-config-data\") pod \"e8637667-8b7e-455e-8ba9-b6291574e4ce\" (UID: \"e8637667-8b7e-455e-8ba9-b6291574e4ce\") " Jan 30 08:27:53 crc kubenswrapper[4870]: I0130 08:27:53.019053 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8637667-8b7e-455e-8ba9-b6291574e4ce-combined-ca-bundle\") pod \"e8637667-8b7e-455e-8ba9-b6291574e4ce\" (UID: \"e8637667-8b7e-455e-8ba9-b6291574e4ce\") " Jan 30 08:27:53 crc kubenswrapper[4870]: I0130 08:27:53.019094 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfbn2\" (UniqueName: \"kubernetes.io/projected/e8637667-8b7e-455e-8ba9-b6291574e4ce-kube-api-access-xfbn2\") pod \"e8637667-8b7e-455e-8ba9-b6291574e4ce\" (UID: \"e8637667-8b7e-455e-8ba9-b6291574e4ce\") " Jan 30 08:27:53 crc kubenswrapper[4870]: I0130 08:27:53.019201 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8637667-8b7e-455e-8ba9-b6291574e4ce-config-data\") pod \"e8637667-8b7e-455e-8ba9-b6291574e4ce\" (UID: \"e8637667-8b7e-455e-8ba9-b6291574e4ce\") " Jan 30 08:27:53 crc kubenswrapper[4870]: I0130 08:27:53.027072 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8637667-8b7e-455e-8ba9-b6291574e4ce-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e8637667-8b7e-455e-8ba9-b6291574e4ce" (UID: "e8637667-8b7e-455e-8ba9-b6291574e4ce"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:27:53 crc kubenswrapper[4870]: I0130 08:27:53.028133 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8637667-8b7e-455e-8ba9-b6291574e4ce-kube-api-access-xfbn2" (OuterVolumeSpecName: "kube-api-access-xfbn2") pod "e8637667-8b7e-455e-8ba9-b6291574e4ce" (UID: "e8637667-8b7e-455e-8ba9-b6291574e4ce"). InnerVolumeSpecName "kube-api-access-xfbn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:53 crc kubenswrapper[4870]: I0130 08:27:53.062724 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8637667-8b7e-455e-8ba9-b6291574e4ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8637667-8b7e-455e-8ba9-b6291574e4ce" (UID: "e8637667-8b7e-455e-8ba9-b6291574e4ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:27:53 crc kubenswrapper[4870]: I0130 08:27:53.082522 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8637667-8b7e-455e-8ba9-b6291574e4ce-config-data" (OuterVolumeSpecName: "config-data") pod "e8637667-8b7e-455e-8ba9-b6291574e4ce" (UID: "e8637667-8b7e-455e-8ba9-b6291574e4ce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:27:53 crc kubenswrapper[4870]: I0130 08:27:53.121694 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8637667-8b7e-455e-8ba9-b6291574e4ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:53 crc kubenswrapper[4870]: I0130 08:27:53.121737 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfbn2\" (UniqueName: \"kubernetes.io/projected/e8637667-8b7e-455e-8ba9-b6291574e4ce-kube-api-access-xfbn2\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:53 crc kubenswrapper[4870]: I0130 08:27:53.121752 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8637667-8b7e-455e-8ba9-b6291574e4ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:53 crc kubenswrapper[4870]: I0130 08:27:53.121766 4870 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e8637667-8b7e-455e-8ba9-b6291574e4ce-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:53 crc kubenswrapper[4870]: I0130 08:27:53.596144 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-gbfzh" event={"ID":"e8637667-8b7e-455e-8ba9-b6291574e4ce","Type":"ContainerDied","Data":"25fc46aae9550fd891b36bb65004fc5f285237547119046fa6127a16efed7dfd"} Jan 30 08:27:53 crc kubenswrapper[4870]: I0130 08:27:53.596197 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25fc46aae9550fd891b36bb65004fc5f285237547119046fa6127a16efed7dfd" Jan 30 08:27:53 crc kubenswrapper[4870]: I0130 08:27:53.596216 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-gbfzh" Jan 30 08:27:53 crc kubenswrapper[4870]: I0130 08:27:53.961655 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-f6r68" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.037779 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881527d5-776b-4639-9306-895d1e370abd-config-data\") pod \"881527d5-776b-4639-9306-895d1e370abd\" (UID: \"881527d5-776b-4639-9306-895d1e370abd\") " Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.037841 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wknsc\" (UniqueName: \"kubernetes.io/projected/881527d5-776b-4639-9306-895d1e370abd-kube-api-access-wknsc\") pod \"881527d5-776b-4639-9306-895d1e370abd\" (UID: \"881527d5-776b-4639-9306-895d1e370abd\") " Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.037932 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881527d5-776b-4639-9306-895d1e370abd-combined-ca-bundle\") pod \"881527d5-776b-4639-9306-895d1e370abd\" (UID: \"881527d5-776b-4639-9306-895d1e370abd\") " Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.044020 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/881527d5-776b-4639-9306-895d1e370abd-kube-api-access-wknsc" (OuterVolumeSpecName: "kube-api-access-wknsc") pod "881527d5-776b-4639-9306-895d1e370abd" (UID: "881527d5-776b-4639-9306-895d1e370abd"). InnerVolumeSpecName "kube-api-access-wknsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.062015 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/881527d5-776b-4639-9306-895d1e370abd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "881527d5-776b-4639-9306-895d1e370abd" (UID: "881527d5-776b-4639-9306-895d1e370abd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.084961 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/881527d5-776b-4639-9306-895d1e370abd-config-data" (OuterVolumeSpecName: "config-data") pod "881527d5-776b-4639-9306-895d1e370abd" (UID: "881527d5-776b-4639-9306-895d1e370abd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.140338 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881527d5-776b-4639-9306-895d1e370abd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.140374 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881527d5-776b-4639-9306-895d1e370abd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.140384 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wknsc\" (UniqueName: \"kubernetes.io/projected/881527d5-776b-4639-9306-895d1e370abd-kube-api-access-wknsc\") on node \"crc\" DevicePath \"\"" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.615141 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f6r68" event={"ID":"881527d5-776b-4639-9306-895d1e370abd","Type":"ContainerDied","Data":"a31484a12f791686023b40843494f6f2ece996f4e78a4c51e74f6a74ad7d512e"} Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.615217 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a31484a12f791686023b40843494f6f2ece996f4e78a4c51e74f6a74ad7d512e" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.617743 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-f6r68" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.859645 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bb457dfc5-294j8"] Jan 30 08:27:54 crc kubenswrapper[4870]: E0130 08:27:54.860086 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="881527d5-776b-4639-9306-895d1e370abd" containerName="keystone-db-sync" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.860100 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="881527d5-776b-4639-9306-895d1e370abd" containerName="keystone-db-sync" Jan 30 08:27:54 crc kubenswrapper[4870]: E0130 08:27:54.860115 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8637667-8b7e-455e-8ba9-b6291574e4ce" containerName="watcher-db-sync" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.860121 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8637667-8b7e-455e-8ba9-b6291574e4ce" containerName="watcher-db-sync" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.860313 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8637667-8b7e-455e-8ba9-b6291574e4ce" containerName="watcher-db-sync" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.860326 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="881527d5-776b-4639-9306-895d1e370abd" containerName="keystone-db-sync" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.861213 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.882222 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bb457dfc5-294j8"] Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.892757 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vd7q8"] Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.898305 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vd7q8" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.904143 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vd7q8"] Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.905058 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vn7b5" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.905437 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.906555 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.906766 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.906868 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.979593 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-scripts\") pod \"keystone-bootstrap-vd7q8\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " pod="openstack/keystone-bootstrap-vd7q8" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.980214 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nf46\" (UniqueName: \"kubernetes.io/projected/38224a9d-ced6-4f76-8117-18e7ca7f33e7-kube-api-access-2nf46\") pod \"keystone-bootstrap-vd7q8\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " pod="openstack/keystone-bootstrap-vd7q8" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.980317 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb457dfc5-294j8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.980395 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-fernet-keys\") pod \"keystone-bootstrap-vd7q8\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " pod="openstack/keystone-bootstrap-vd7q8" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.980563 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-config\") pod \"dnsmasq-dns-5bb457dfc5-294j8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.980701 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb457dfc5-294j8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.981045 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s8jg\" (UniqueName: \"kubernetes.io/projected/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-kube-api-access-8s8jg\") pod \"dnsmasq-dns-5bb457dfc5-294j8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.981138 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-dns-svc\") pod \"dnsmasq-dns-5bb457dfc5-294j8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.981231 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-dns-swift-storage-0\") pod \"dnsmasq-dns-5bb457dfc5-294j8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.981507 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-credential-keys\") pod \"keystone-bootstrap-vd7q8\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " pod="openstack/keystone-bootstrap-vd7q8" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.981587 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-config-data\") pod \"keystone-bootstrap-vd7q8\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " pod="openstack/keystone-bootstrap-vd7q8" Jan 30 08:27:54 crc kubenswrapper[4870]: I0130 08:27:54.981663 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-combined-ca-bundle\") pod \"keystone-bootstrap-vd7q8\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " pod="openstack/keystone-bootstrap-vd7q8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.050609 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.052155 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.061543 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.075486 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-b9kpk" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.085091 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-credential-keys\") pod \"keystone-bootstrap-vd7q8\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " pod="openstack/keystone-bootstrap-vd7q8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.085289 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-config-data\") pod \"keystone-bootstrap-vd7q8\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " pod="openstack/keystone-bootstrap-vd7q8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.085359 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-combined-ca-bundle\") pod \"keystone-bootstrap-vd7q8\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " pod="openstack/keystone-bootstrap-vd7q8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.085443 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-scripts\") pod \"keystone-bootstrap-vd7q8\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " pod="openstack/keystone-bootstrap-vd7q8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.085508 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nf46\" (UniqueName: \"kubernetes.io/projected/38224a9d-ced6-4f76-8117-18e7ca7f33e7-kube-api-access-2nf46\") pod \"keystone-bootstrap-vd7q8\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " pod="openstack/keystone-bootstrap-vd7q8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.085586 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb457dfc5-294j8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.085654 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-fernet-keys\") pod \"keystone-bootstrap-vd7q8\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " pod="openstack/keystone-bootstrap-vd7q8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.086421 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-config\") pod \"dnsmasq-dns-5bb457dfc5-294j8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.086496 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb457dfc5-294j8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.086575 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s8jg\" (UniqueName: \"kubernetes.io/projected/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-kube-api-access-8s8jg\") pod \"dnsmasq-dns-5bb457dfc5-294j8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.086661 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-dns-svc\") pod \"dnsmasq-dns-5bb457dfc5-294j8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.086741 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-dns-swift-storage-0\") pod \"dnsmasq-dns-5bb457dfc5-294j8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.087665 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-dns-swift-storage-0\") pod \"dnsmasq-dns-5bb457dfc5-294j8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.088020 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-config\") pod \"dnsmasq-dns-5bb457dfc5-294j8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.088610 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb457dfc5-294j8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.088858 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb457dfc5-294j8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.089597 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-dns-svc\") pod \"dnsmasq-dns-5bb457dfc5-294j8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.096829 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-fernet-keys\") pod \"keystone-bootstrap-vd7q8\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " pod="openstack/keystone-bootstrap-vd7q8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.103349 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.114832 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-credential-keys\") pod \"keystone-bootstrap-vd7q8\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " pod="openstack/keystone-bootstrap-vd7q8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.115039 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-combined-ca-bundle\") pod \"keystone-bootstrap-vd7q8\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " pod="openstack/keystone-bootstrap-vd7q8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.125480 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-config-data\") pod \"keystone-bootstrap-vd7q8\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " pod="openstack/keystone-bootstrap-vd7q8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.133179 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.148247 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.150156 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nf46\" (UniqueName: \"kubernetes.io/projected/38224a9d-ced6-4f76-8117-18e7ca7f33e7-kube-api-access-2nf46\") pod \"keystone-bootstrap-vd7q8\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " pod="openstack/keystone-bootstrap-vd7q8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.151773 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.157451 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.163707 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s8jg\" (UniqueName: \"kubernetes.io/projected/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-kube-api-access-8s8jg\") pod \"dnsmasq-dns-5bb457dfc5-294j8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.165099 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-scripts\") pod \"keystone-bootstrap-vd7q8\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " pod="openstack/keystone-bootstrap-vd7q8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.190145 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqd86\" (UniqueName: 
\"kubernetes.io/projected/d501bb9c-d88d-4362-a48e-4d0347ecc90e-kube-api-access-gqd86\") pod \"watcher-applier-0\" (UID: \"d501bb9c-d88d-4362-a48e-4d0347ecc90e\") " pod="openstack/watcher-applier-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.190184 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d501bb9c-d88d-4362-a48e-4d0347ecc90e-logs\") pod \"watcher-applier-0\" (UID: \"d501bb9c-d88d-4362-a48e-4d0347ecc90e\") " pod="openstack/watcher-applier-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.190220 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d501bb9c-d88d-4362-a48e-4d0347ecc90e-config-data\") pod \"watcher-applier-0\" (UID: \"d501bb9c-d88d-4362-a48e-4d0347ecc90e\") " pod="openstack/watcher-applier-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.190282 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d501bb9c-d88d-4362-a48e-4d0347ecc90e-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"d501bb9c-d88d-4362-a48e-4d0347ecc90e\") " pod="openstack/watcher-applier-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.191509 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.193358 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.198511 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.202820 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.207982 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.222508 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6d56cb75f7-5b6cr"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.223984 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d56cb75f7-5b6cr" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.235429 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vd7q8" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.240054 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.240097 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.240061 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-brkzs" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.240290 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.249460 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.249514 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.266625 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-9g27p"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.267762 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9g27p" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.289667 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.289839 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.289961 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4blb4" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.291401 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gsr9\" (UniqueName: \"kubernetes.io/projected/3be4280c-f244-49ee-8731-bf39ac51ee1e-kube-api-access-2gsr9\") pod \"watcher-api-0\" (UID: \"3be4280c-f244-49ee-8731-bf39ac51ee1e\") " pod="openstack/watcher-api-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.291440 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d501bb9c-d88d-4362-a48e-4d0347ecc90e-config-data\") pod \"watcher-applier-0\" (UID: \"d501bb9c-d88d-4362-a48e-4d0347ecc90e\") " pod="openstack/watcher-applier-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.291464 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3be4280c-f244-49ee-8731-bf39ac51ee1e-logs\") pod \"watcher-api-0\" (UID: \"3be4280c-f244-49ee-8731-bf39ac51ee1e\") " pod="openstack/watcher-api-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.291483 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/547994a2-f3d5-4ac9-a025-2644e86fe00d-logs\") pod \"horizon-6d56cb75f7-5b6cr\" (UID: \"547994a2-f3d5-4ac9-a025-2644e86fe00d\") " pod="openstack/horizon-6d56cb75f7-5b6cr" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.291501 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3be4280c-f244-49ee-8731-bf39ac51ee1e-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"3be4280c-f244-49ee-8731-bf39ac51ee1e\") " pod="openstack/watcher-api-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.291517 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3be4280c-f244-49ee-8731-bf39ac51ee1e-config-data\") pod \"watcher-api-0\" (UID: \"3be4280c-f244-49ee-8731-bf39ac51ee1e\") " pod="openstack/watcher-api-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.291538 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.291558 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/547994a2-f3d5-4ac9-a025-2644e86fe00d-scripts\") pod \"horizon-6d56cb75f7-5b6cr\" (UID: \"547994a2-f3d5-4ac9-a025-2644e86fe00d\") " pod="openstack/horizon-6d56cb75f7-5b6cr" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.291573 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/547994a2-f3d5-4ac9-a025-2644e86fe00d-config-data\") pod \"horizon-6d56cb75f7-5b6cr\" (UID: \"547994a2-f3d5-4ac9-a025-2644e86fe00d\") " pod="openstack/horizon-6d56cb75f7-5b6cr" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.291601 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.291618 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d501bb9c-d88d-4362-a48e-4d0347ecc90e-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"d501bb9c-d88d-4362-a48e-4d0347ecc90e\") " pod="openstack/watcher-applier-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.291648 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gl4q\" (UniqueName: \"kubernetes.io/projected/547994a2-f3d5-4ac9-a025-2644e86fe00d-kube-api-access-7gl4q\") pod \"horizon-6d56cb75f7-5b6cr\" (UID: \"547994a2-f3d5-4ac9-a025-2644e86fe00d\") " pod="openstack/horizon-6d56cb75f7-5b6cr" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.291670 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" 
(UniqueName: \"kubernetes.io/secret/547994a2-f3d5-4ac9-a025-2644e86fe00d-horizon-secret-key\") pod \"horizon-6d56cb75f7-5b6cr\" (UID: \"547994a2-f3d5-4ac9-a025-2644e86fe00d\") " pod="openstack/horizon-6d56cb75f7-5b6cr" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.291708 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-logs\") pod \"watcher-decision-engine-0\" (UID: \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.291725 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sskqz\" (UniqueName: \"kubernetes.io/projected/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-kube-api-access-sskqz\") pod \"watcher-decision-engine-0\" (UID: \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.291747 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be4280c-f244-49ee-8731-bf39ac51ee1e-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"3be4280c-f244-49ee-8731-bf39ac51ee1e\") " pod="openstack/watcher-api-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.291769 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.291792 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqd86\" (UniqueName: \"kubernetes.io/projected/d501bb9c-d88d-4362-a48e-4d0347ecc90e-kube-api-access-gqd86\") pod \"watcher-applier-0\" (UID: \"d501bb9c-d88d-4362-a48e-4d0347ecc90e\") " pod="openstack/watcher-applier-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.291809 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d501bb9c-d88d-4362-a48e-4d0347ecc90e-logs\") pod \"watcher-applier-0\" (UID: \"d501bb9c-d88d-4362-a48e-4d0347ecc90e\") " pod="openstack/watcher-applier-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.292190 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d501bb9c-d88d-4362-a48e-4d0347ecc90e-logs\") pod \"watcher-applier-0\" (UID: \"d501bb9c-d88d-4362-a48e-4d0347ecc90e\") " pod="openstack/watcher-applier-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.302837 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d501bb9c-d88d-4362-a48e-4d0347ecc90e-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"d501bb9c-d88d-4362-a48e-4d0347ecc90e\") " pod="openstack/watcher-applier-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.310940 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d56cb75f7-5b6cr"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.311496 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d501bb9c-d88d-4362-a48e-4d0347ecc90e-config-data\") pod \"watcher-applier-0\" (UID: \"d501bb9c-d88d-4362-a48e-4d0347ecc90e\") " pod="openstack/watcher-applier-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.322676 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqd86\" (UniqueName: \"kubernetes.io/projected/d501bb9c-d88d-4362-a48e-4d0347ecc90e-kube-api-access-gqd86\") pod \"watcher-applier-0\" (UID: \"d501bb9c-d88d-4362-a48e-4d0347ecc90e\") " pod="openstack/watcher-applier-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.357013 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9g27p"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.387252 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-9mjj4"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.388789 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9mjj4" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.397481 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.397600 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.397774 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nnfmm" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.409799 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/685bde78-dea1-4864-a825-af176178bd11-etc-machine-id\") pod \"cinder-db-sync-9g27p\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " pod="openstack/cinder-db-sync-9g27p" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.409844 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-logs\") pod \"watcher-decision-engine-0\" (UID: \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.409897 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sskqz\" (UniqueName: \"kubernetes.io/projected/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-kube-api-access-sskqz\") pod \"watcher-decision-engine-0\" (UID: \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.409946 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be4280c-f244-49ee-8731-bf39ac51ee1e-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"3be4280c-f244-49ee-8731-bf39ac51ee1e\") " pod="openstack/watcher-api-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.409976 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-db-sync-config-data\") pod \"cinder-db-sync-9g27p\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " pod="openstack/cinder-db-sync-9g27p" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.410001 4870 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.410040 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgl2s\" (UniqueName: \"kubernetes.io/projected/685bde78-dea1-4864-a825-af176178bd11-kube-api-access-lgl2s\") pod \"cinder-db-sync-9g27p\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " pod="openstack/cinder-db-sync-9g27p" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.410087 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gsr9\" (UniqueName: \"kubernetes.io/projected/3be4280c-f244-49ee-8731-bf39ac51ee1e-kube-api-access-2gsr9\") pod \"watcher-api-0\" (UID: \"3be4280c-f244-49ee-8731-bf39ac51ee1e\") " pod="openstack/watcher-api-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.410106 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3be4280c-f244-49ee-8731-bf39ac51ee1e-logs\") pod \"watcher-api-0\" (UID: \"3be4280c-f244-49ee-8731-bf39ac51ee1e\") " pod="openstack/watcher-api-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.410122 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-config-data\") pod \"cinder-db-sync-9g27p\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " pod="openstack/cinder-db-sync-9g27p" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.410146 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/547994a2-f3d5-4ac9-a025-2644e86fe00d-logs\") pod \"horizon-6d56cb75f7-5b6cr\" (UID: \"547994a2-f3d5-4ac9-a025-2644e86fe00d\") " pod="openstack/horizon-6d56cb75f7-5b6cr" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.410163 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3be4280c-f244-49ee-8731-bf39ac51ee1e-config-data\") pod \"watcher-api-0\" (UID: \"3be4280c-f244-49ee-8731-bf39ac51ee1e\") " pod="openstack/watcher-api-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.410177 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3be4280c-f244-49ee-8731-bf39ac51ee1e-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"3be4280c-f244-49ee-8731-bf39ac51ee1e\") " pod="openstack/watcher-api-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.410200 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-scripts\") pod \"cinder-db-sync-9g27p\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " pod="openstack/cinder-db-sync-9g27p" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.410223 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-config-data\") pod 
\"watcher-decision-engine-0\" (UID: \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.410255 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/547994a2-f3d5-4ac9-a025-2644e86fe00d-scripts\") pod \"horizon-6d56cb75f7-5b6cr\" (UID: \"547994a2-f3d5-4ac9-a025-2644e86fe00d\") " pod="openstack/horizon-6d56cb75f7-5b6cr" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.410275 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/547994a2-f3d5-4ac9-a025-2644e86fe00d-config-data\") pod \"horizon-6d56cb75f7-5b6cr\" (UID: \"547994a2-f3d5-4ac9-a025-2644e86fe00d\") " pod="openstack/horizon-6d56cb75f7-5b6cr" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.410313 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-combined-ca-bundle\") pod \"cinder-db-sync-9g27p\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " pod="openstack/cinder-db-sync-9g27p" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.410343 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.410399 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gl4q\" (UniqueName: \"kubernetes.io/projected/547994a2-f3d5-4ac9-a025-2644e86fe00d-kube-api-access-7gl4q\") pod \"horizon-6d56cb75f7-5b6cr\" (UID: \"547994a2-f3d5-4ac9-a025-2644e86fe00d\") " pod="openstack/horizon-6d56cb75f7-5b6cr" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.410429 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/547994a2-f3d5-4ac9-a025-2644e86fe00d-horizon-secret-key\") pod \"horizon-6d56cb75f7-5b6cr\" (UID: \"547994a2-f3d5-4ac9-a025-2644e86fe00d\") " pod="openstack/horizon-6d56cb75f7-5b6cr" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.411300 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-logs\") pod \"watcher-decision-engine-0\" (UID: \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.412233 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/547994a2-f3d5-4ac9-a025-2644e86fe00d-logs\") pod \"horizon-6d56cb75f7-5b6cr\" (UID: \"547994a2-f3d5-4ac9-a025-2644e86fe00d\") " pod="openstack/horizon-6d56cb75f7-5b6cr" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.412248 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3be4280c-f244-49ee-8731-bf39ac51ee1e-logs\") pod \"watcher-api-0\" (UID: \"3be4280c-f244-49ee-8731-bf39ac51ee1e\") " pod="openstack/watcher-api-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.412715 4870 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/547994a2-f3d5-4ac9-a025-2644e86fe00d-scripts\") pod \"horizon-6d56cb75f7-5b6cr\" (UID: \"547994a2-f3d5-4ac9-a025-2644e86fe00d\") " pod="openstack/horizon-6d56cb75f7-5b6cr" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.413314 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/547994a2-f3d5-4ac9-a025-2644e86fe00d-config-data\") pod \"horizon-6d56cb75f7-5b6cr\" (UID: \"547994a2-f3d5-4ac9-a025-2644e86fe00d\") " pod="openstack/horizon-6d56cb75f7-5b6cr" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.416556 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.418194 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3be4280c-f244-49ee-8731-bf39ac51ee1e-config-data\") pod \"watcher-api-0\" (UID: \"3be4280c-f244-49ee-8731-bf39ac51ee1e\") " pod="openstack/watcher-api-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.419472 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.419540 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.420362 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.435376 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.437049 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.437772 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.438043 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be4280c-f244-49ee-8731-bf39ac51ee1e-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"3be4280c-f244-49ee-8731-bf39ac51ee1e\") " pod="openstack/watcher-api-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.438263 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.439227 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/547994a2-f3d5-4ac9-a025-2644e86fe00d-horizon-secret-key\") pod 
\"horizon-6d56cb75f7-5b6cr\" (UID: \"547994a2-f3d5-4ac9-a025-2644e86fe00d\") " pod="openstack/horizon-6d56cb75f7-5b6cr" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.449207 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sskqz\" (UniqueName: \"kubernetes.io/projected/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-kube-api-access-sskqz\") pod \"watcher-decision-engine-0\" (UID: \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.453585 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3be4280c-f244-49ee-8731-bf39ac51ee1e-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"3be4280c-f244-49ee-8731-bf39ac51ee1e\") " pod="openstack/watcher-api-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.453661 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9mjj4"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.467289 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gl4q\" (UniqueName: \"kubernetes.io/projected/547994a2-f3d5-4ac9-a025-2644e86fe00d-kube-api-access-7gl4q\") pod \"horizon-6d56cb75f7-5b6cr\" (UID: \"547994a2-f3d5-4ac9-a025-2644e86fe00d\") " pod="openstack/horizon-6d56cb75f7-5b6cr" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.467568 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gsr9\" (UniqueName: \"kubernetes.io/projected/3be4280c-f244-49ee-8731-bf39ac51ee1e-kube-api-access-2gsr9\") pod \"watcher-api-0\" (UID: \"3be4280c-f244-49ee-8731-bf39ac51ee1e\") " pod="openstack/watcher-api-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.470163 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-d2mx7"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.471264 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-d2mx7" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.474829 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qmdf5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.475100 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.481965 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bb457dfc5-294j8"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.490099 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-b57k5"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.491708 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-b57k5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.495180 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.495986 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.497532 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-skpxp" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.512449 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.512529 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-config-data\") pod \"cinder-db-sync-9g27p\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " pod="openstack/cinder-db-sync-9g27p" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.513844 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-scripts\") pod \"cinder-db-sync-9g27p\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " pod="openstack/cinder-db-sync-9g27p" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.513900 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-run-httpd\") pod \"ceilometer-0\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.513919 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-log-httpd\") pod \"ceilometer-0\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.513938 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rwsc\" (UniqueName: \"kubernetes.io/projected/505df376-c8bc-44ce-9c14-8cf94730c550-kube-api-access-7rwsc\") pod \"neutron-db-sync-9mjj4\" (UID: \"505df376-c8bc-44ce-9c14-8cf94730c550\") " pod="openstack/neutron-db-sync-9mjj4" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.514001 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-combined-ca-bundle\") pod \"cinder-db-sync-9g27p\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " pod="openstack/cinder-db-sync-9g27p" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.514040 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/505df376-c8bc-44ce-9c14-8cf94730c550-config\") pod \"neutron-db-sync-9mjj4\" (UID: \"505df376-c8bc-44ce-9c14-8cf94730c550\") " 
pod="openstack/neutron-db-sync-9mjj4" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.514066 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdb9v\" (UniqueName: \"kubernetes.io/projected/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-kube-api-access-gdb9v\") pod \"ceilometer-0\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.514142 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-config-data\") pod \"ceilometer-0\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.514272 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/685bde78-dea1-4864-a825-af176178bd11-etc-machine-id\") pod \"cinder-db-sync-9g27p\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " pod="openstack/cinder-db-sync-9g27p" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.514296 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.514377 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505df376-c8bc-44ce-9c14-8cf94730c550-combined-ca-bundle\") pod \"neutron-db-sync-9mjj4\" (UID: \"505df376-c8bc-44ce-9c14-8cf94730c550\") " pod="openstack/neutron-db-sync-9mjj4" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.514419 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-scripts\") pod \"ceilometer-0\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.514454 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-db-sync-config-data\") pod \"cinder-db-sync-9g27p\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " pod="openstack/cinder-db-sync-9g27p" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.514509 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgl2s\" (UniqueName: \"kubernetes.io/projected/685bde78-dea1-4864-a825-af176178bd11-kube-api-access-lgl2s\") pod \"cinder-db-sync-9g27p\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " pod="openstack/cinder-db-sync-9g27p" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.516145 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/685bde78-dea1-4864-a825-af176178bd11-etc-machine-id\") pod \"cinder-db-sync-9g27p\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " pod="openstack/cinder-db-sync-9g27p" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.518563 4870 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-scripts\") pod \"cinder-db-sync-9g27p\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " pod="openstack/cinder-db-sync-9g27p" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.518974 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-combined-ca-bundle\") pod \"cinder-db-sync-9g27p\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " pod="openstack/cinder-db-sync-9g27p" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.519263 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.519581 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-db-sync-config-data\") pod \"cinder-db-sync-9g27p\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " pod="openstack/cinder-db-sync-9g27p" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.520970 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-d2mx7"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.524180 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-config-data\") pod \"cinder-db-sync-9g27p\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " pod="openstack/cinder-db-sync-9g27p" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.540713 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgl2s\" (UniqueName: \"kubernetes.io/projected/685bde78-dea1-4864-a825-af176178bd11-kube-api-access-lgl2s\") pod \"cinder-db-sync-9g27p\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " pod="openstack/cinder-db-sync-9g27p" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.555133 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-b57k5"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.570788 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fbb4d475f-66fsw"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.575279 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.577597 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5fb6b548c7-56kg5"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.579650 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fb6b548c7-56kg5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.585145 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fb6b548c7-56kg5"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.594513 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fbb4d475f-66fsw"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.615535 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505df376-c8bc-44ce-9c14-8cf94730c550-combined-ca-bundle\") pod \"neutron-db-sync-9mjj4\" (UID: \"505df376-c8bc-44ce-9c14-8cf94730c550\") " pod="openstack/neutron-db-sync-9mjj4" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.615581 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-scripts\") pod \"ceilometer-0\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.615605 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmq48\" (UniqueName: \"kubernetes.io/projected/c3bd649e-5c3c-495f-933f-3b516167cbd2-kube-api-access-bmq48\") pod \"barbican-db-sync-d2mx7\" (UID: \"c3bd649e-5c3c-495f-933f-3b516167cbd2\") " pod="openstack/barbican-db-sync-d2mx7" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.615631 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3bd649e-5c3c-495f-933f-3b516167cbd2-combined-ca-bundle\") pod \"barbican-db-sync-d2mx7\" (UID: \"c3bd649e-5c3c-495f-933f-3b516167cbd2\") " pod="openstack/barbican-db-sync-d2mx7" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.615659 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c3bd649e-5c3c-495f-933f-3b516167cbd2-db-sync-config-data\") pod \"barbican-db-sync-d2mx7\" (UID: \"c3bd649e-5c3c-495f-933f-3b516167cbd2\") " pod="openstack/barbican-db-sync-d2mx7" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.615682 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-logs\") pod \"placement-db-sync-b57k5\" (UID: \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\") " pod="openstack/placement-db-sync-b57k5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.615703 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.615735 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-run-httpd\") pod \"ceilometer-0\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.615752 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7rwsc\" (UniqueName: \"kubernetes.io/projected/505df376-c8bc-44ce-9c14-8cf94730c550-kube-api-access-7rwsc\") pod \"neutron-db-sync-9mjj4\" (UID: \"505df376-c8bc-44ce-9c14-8cf94730c550\") " pod="openstack/neutron-db-sync-9mjj4" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.615765 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-log-httpd\") pod \"ceilometer-0\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.615779 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-scripts\") pod \"placement-db-sync-b57k5\" (UID: \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\") " pod="openstack/placement-db-sync-b57k5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.615811 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2pwt\" (UniqueName: \"kubernetes.io/projected/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-kube-api-access-n2pwt\") pod \"placement-db-sync-b57k5\" (UID: \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\") " pod="openstack/placement-db-sync-b57k5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.615831 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-config-data\") pod \"placement-db-sync-b57k5\" (UID: \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\") " pod="openstack/placement-db-sync-b57k5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.615848 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/505df376-c8bc-44ce-9c14-8cf94730c550-config\") pod \"neutron-db-sync-9mjj4\" (UID: \"505df376-c8bc-44ce-9c14-8cf94730c550\") " pod="openstack/neutron-db-sync-9mjj4" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.615864 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdb9v\" (UniqueName: \"kubernetes.io/projected/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-kube-api-access-gdb9v\") pod \"ceilometer-0\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.615939 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-combined-ca-bundle\") pod \"placement-db-sync-b57k5\" (UID: \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\") " pod="openstack/placement-db-sync-b57k5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.615959 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-config-data\") pod \"ceilometer-0\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.615977 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.624845 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505df376-c8bc-44ce-9c14-8cf94730c550-combined-ca-bundle\") pod \"neutron-db-sync-9mjj4\" (UID: \"505df376-c8bc-44ce-9c14-8cf94730c550\") " pod="openstack/neutron-db-sync-9mjj4" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.626655 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-run-httpd\") pod \"ceilometer-0\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.628850 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-log-httpd\") pod \"ceilometer-0\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.633674 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/505df376-c8bc-44ce-9c14-8cf94730c550-config\") pod \"neutron-db-sync-9mjj4\" (UID: \"505df376-c8bc-44ce-9c14-8cf94730c550\") " pod="openstack/neutron-db-sync-9mjj4" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.639769 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.649544 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.652416 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.653708 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-config-data\") pod \"ceilometer-0\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.661213 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d56cb75f7-5b6cr" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.666837 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-scripts\") pod \"ceilometer-0\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.667859 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdb9v\" (UniqueName: \"kubernetes.io/projected/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-kube-api-access-gdb9v\") pod \"ceilometer-0\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.668772 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9g27p" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.669533 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.676467 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rwsc\" (UniqueName: \"kubernetes.io/projected/505df376-c8bc-44ce-9c14-8cf94730c550-kube-api-access-7rwsc\") pod \"neutron-db-sync-9mjj4\" (UID: \"505df376-c8bc-44ce-9c14-8cf94730c550\") " pod="openstack/neutron-db-sync-9mjj4" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.727002 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c3bd649e-5c3c-495f-933f-3b516167cbd2-db-sync-config-data\") pod \"barbican-db-sync-d2mx7\" (UID: \"c3bd649e-5c3c-495f-933f-3b516167cbd2\") " pod="openstack/barbican-db-sync-d2mx7" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.727269 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7zw7\" (UniqueName: \"kubernetes.io/projected/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-kube-api-access-r7zw7\") pod \"dnsmasq-dns-7fbb4d475f-66fsw\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.727309 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-config-data\") pod \"horizon-5fb6b548c7-56kg5\" (UID: \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\") " pod="openstack/horizon-5fb6b548c7-56kg5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.727347 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-logs\") pod \"placement-db-sync-b57k5\" (UID: \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\") " pod="openstack/placement-db-sync-b57k5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.727402 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnppg\" (UniqueName: \"kubernetes.io/projected/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-kube-api-access-jnppg\") pod \"horizon-5fb6b548c7-56kg5\" (UID: \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\") " pod="openstack/horizon-5fb6b548c7-56kg5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.727464 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-scripts\") pod \"placement-db-sync-b57k5\" (UID: \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\") " pod="openstack/placement-db-sync-b57k5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.727507 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-ovsdbserver-sb\") pod \"dnsmasq-dns-7fbb4d475f-66fsw\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:27:55 
crc kubenswrapper[4870]: I0130 08:27:55.727535 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-dns-swift-storage-0\") pod \"dnsmasq-dns-7fbb4d475f-66fsw\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.727577 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2pwt\" (UniqueName: \"kubernetes.io/projected/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-kube-api-access-n2pwt\") pod \"placement-db-sync-b57k5\" (UID: \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\") " pod="openstack/placement-db-sync-b57k5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.727601 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-config-data\") pod \"placement-db-sync-b57k5\" (UID: \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\") " pod="openstack/placement-db-sync-b57k5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.727621 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-logs\") pod \"horizon-5fb6b548c7-56kg5\" (UID: \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\") " pod="openstack/horizon-5fb6b548c7-56kg5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.727662 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-ovsdbserver-nb\") pod \"dnsmasq-dns-7fbb4d475f-66fsw\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.727685 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-horizon-secret-key\") pod \"horizon-5fb6b548c7-56kg5\" (UID: \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\") " pod="openstack/horizon-5fb6b548c7-56kg5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.727718 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-scripts\") pod \"horizon-5fb6b548c7-56kg5\" (UID: \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\") " pod="openstack/horizon-5fb6b548c7-56kg5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.727771 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-combined-ca-bundle\") pod \"placement-db-sync-b57k5\" (UID: \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\") " pod="openstack/placement-db-sync-b57k5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.727817 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-dns-svc\") pod \"dnsmasq-dns-7fbb4d475f-66fsw\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:27:55 crc 
kubenswrapper[4870]: I0130 08:27:55.727860 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmq48\" (UniqueName: \"kubernetes.io/projected/c3bd649e-5c3c-495f-933f-3b516167cbd2-kube-api-access-bmq48\") pod \"barbican-db-sync-d2mx7\" (UID: \"c3bd649e-5c3c-495f-933f-3b516167cbd2\") " pod="openstack/barbican-db-sync-d2mx7" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.727908 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3bd649e-5c3c-495f-933f-3b516167cbd2-combined-ca-bundle\") pod \"barbican-db-sync-d2mx7\" (UID: \"c3bd649e-5c3c-495f-933f-3b516167cbd2\") " pod="openstack/barbican-db-sync-d2mx7" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.727933 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-config\") pod \"dnsmasq-dns-7fbb4d475f-66fsw\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.731168 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-logs\") pod \"placement-db-sync-b57k5\" (UID: \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\") " pod="openstack/placement-db-sync-b57k5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.732524 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-config-data\") pod \"placement-db-sync-b57k5\" (UID: \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\") " pod="openstack/placement-db-sync-b57k5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.734384 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-combined-ca-bundle\") pod \"placement-db-sync-b57k5\" (UID: \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\") " pod="openstack/placement-db-sync-b57k5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.735129 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-scripts\") pod \"placement-db-sync-b57k5\" (UID: \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\") " pod="openstack/placement-db-sync-b57k5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.735782 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c3bd649e-5c3c-495f-933f-3b516167cbd2-db-sync-config-data\") pod \"barbican-db-sync-d2mx7\" (UID: \"c3bd649e-5c3c-495f-933f-3b516167cbd2\") " pod="openstack/barbican-db-sync-d2mx7" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.735951 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3bd649e-5c3c-495f-933f-3b516167cbd2-combined-ca-bundle\") pod \"barbican-db-sync-d2mx7\" (UID: \"c3bd649e-5c3c-495f-933f-3b516167cbd2\") " pod="openstack/barbican-db-sync-d2mx7" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.752966 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9mjj4" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.756995 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2pwt\" (UniqueName: \"kubernetes.io/projected/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-kube-api-access-n2pwt\") pod \"placement-db-sync-b57k5\" (UID: \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\") " pod="openstack/placement-db-sync-b57k5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.762485 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmq48\" (UniqueName: \"kubernetes.io/projected/c3bd649e-5c3c-495f-933f-3b516167cbd2-kube-api-access-bmq48\") pod \"barbican-db-sync-d2mx7\" (UID: \"c3bd649e-5c3c-495f-933f-3b516167cbd2\") " pod="openstack/barbican-db-sync-d2mx7" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.772175 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.799100 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-d2mx7" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.820157 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-b57k5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.831464 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-dns-svc\") pod \"dnsmasq-dns-7fbb4d475f-66fsw\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.831558 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-config\") pod \"dnsmasq-dns-7fbb4d475f-66fsw\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.831614 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7zw7\" (UniqueName: \"kubernetes.io/projected/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-kube-api-access-r7zw7\") pod \"dnsmasq-dns-7fbb4d475f-66fsw\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.832796 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-config-data\") pod \"horizon-5fb6b548c7-56kg5\" (UID: \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\") " pod="openstack/horizon-5fb6b548c7-56kg5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.832904 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnppg\" (UniqueName: \"kubernetes.io/projected/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-kube-api-access-jnppg\") pod \"horizon-5fb6b548c7-56kg5\" (UID: \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\") " pod="openstack/horizon-5fb6b548c7-56kg5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.832980 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-ovsdbserver-sb\") pod 
\"dnsmasq-dns-7fbb4d475f-66fsw\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.833011 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-dns-swift-storage-0\") pod \"dnsmasq-dns-7fbb4d475f-66fsw\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.833065 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-logs\") pod \"horizon-5fb6b548c7-56kg5\" (UID: \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\") " pod="openstack/horizon-5fb6b548c7-56kg5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.833098 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-ovsdbserver-nb\") pod \"dnsmasq-dns-7fbb4d475f-66fsw\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.833120 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-horizon-secret-key\") pod \"horizon-5fb6b548c7-56kg5\" (UID: \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\") " pod="openstack/horizon-5fb6b548c7-56kg5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.833150 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-scripts\") pod \"horizon-5fb6b548c7-56kg5\" (UID: \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\") " pod="openstack/horizon-5fb6b548c7-56kg5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.833196 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-config\") pod \"dnsmasq-dns-7fbb4d475f-66fsw\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.833600 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-logs\") pod \"horizon-5fb6b548c7-56kg5\" (UID: \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\") " pod="openstack/horizon-5fb6b548c7-56kg5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.834283 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-ovsdbserver-sb\") pod \"dnsmasq-dns-7fbb4d475f-66fsw\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.834604 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-config-data\") pod \"horizon-5fb6b548c7-56kg5\" (UID: \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\") " pod="openstack/horizon-5fb6b548c7-56kg5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 
08:27:55.835822 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-scripts\") pod \"horizon-5fb6b548c7-56kg5\" (UID: \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\") " pod="openstack/horizon-5fb6b548c7-56kg5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.838385 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-horizon-secret-key\") pod \"horizon-5fb6b548c7-56kg5\" (UID: \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\") " pod="openstack/horizon-5fb6b548c7-56kg5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.842308 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-dns-svc\") pod \"dnsmasq-dns-7fbb4d475f-66fsw\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.843083 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-ovsdbserver-nb\") pod \"dnsmasq-dns-7fbb4d475f-66fsw\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.848214 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-dns-swift-storage-0\") pod \"dnsmasq-dns-7fbb4d475f-66fsw\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.852609 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnppg\" (UniqueName: \"kubernetes.io/projected/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-kube-api-access-jnppg\") pod \"horizon-5fb6b548c7-56kg5\" (UID: \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\") " pod="openstack/horizon-5fb6b548c7-56kg5" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.857911 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7zw7\" (UniqueName: \"kubernetes.io/projected/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-kube-api-access-r7zw7\") pod \"dnsmasq-dns-7fbb4d475f-66fsw\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.888592 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vd7q8"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.896381 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.897517 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bb457dfc5-294j8"] Jan 30 08:27:55 crc kubenswrapper[4870]: I0130 08:27:55.915108 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fb6b548c7-56kg5" Jan 30 08:27:55 crc kubenswrapper[4870]: W0130 08:27:55.985269 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc98de4d_b882_4f13_bc7e_1e6070ffd7d8.slice/crio-0596995a387e00d4157105eb8a5a484304961f9369875c777b77a509e3229e35 WatchSource:0}: Error finding container 0596995a387e00d4157105eb8a5a484304961f9369875c777b77a509e3229e35: Status 404 returned error can't find the container with id 0596995a387e00d4157105eb8a5a484304961f9369875c777b77a509e3229e35 Jan 30 08:27:56 crc kubenswrapper[4870]: W0130 08:27:56.010950 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38224a9d_ced6_4f76_8117_18e7ca7f33e7.slice/crio-bf4fa656174794487020ac5e8e368cdaeecb6ab51935c7a52ce6d58beb8d1571 WatchSource:0}: Error finding container bf4fa656174794487020ac5e8e368cdaeecb6ab51935c7a52ce6d58beb8d1571: Status 404 returned error can't find the container with id bf4fa656174794487020ac5e8e368cdaeecb6ab51935c7a52ce6d58beb8d1571 Jan 30 08:27:56 crc kubenswrapper[4870]: I0130 08:27:56.209035 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 30 08:27:56 crc kubenswrapper[4870]: I0130 08:27:56.293338 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 30 08:27:56 crc kubenswrapper[4870]: W0130 08:27:56.345475 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eb0ea94_f1b1_41c4_a968_ff1d4af60e2f.slice/crio-688a235f4ea3f6827668ea12cec3c801c91b77e9c743386ea44f9613e759f338 WatchSource:0}: Error finding container 688a235f4ea3f6827668ea12cec3c801c91b77e9c743386ea44f9613e759f338: Status 404 returned error can't find the container with id 688a235f4ea3f6827668ea12cec3c801c91b77e9c743386ea44f9613e759f338 Jan 30 08:27:56 crc kubenswrapper[4870]: I0130 08:27:56.584536 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 30 08:27:56 crc kubenswrapper[4870]: I0130 08:27:56.687416 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9mjj4"] Jan 30 08:27:56 crc kubenswrapper[4870]: I0130 08:27:56.687704 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vd7q8" event={"ID":"38224a9d-ced6-4f76-8117-18e7ca7f33e7","Type":"ContainerStarted","Data":"bf4fa656174794487020ac5e8e368cdaeecb6ab51935c7a52ce6d58beb8d1571"} Jan 30 08:27:56 crc kubenswrapper[4870]: I0130 08:27:56.694094 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f","Type":"ContainerStarted","Data":"688a235f4ea3f6827668ea12cec3c801c91b77e9c743386ea44f9613e759f338"} Jan 30 08:27:56 crc kubenswrapper[4870]: W0130 08:27:56.696990 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod685bde78_dea1_4864_a825_af176178bd11.slice/crio-208d52cf2eb05b40a226e4a1738a5480e549f0f08e4a6ad6b02178c49de677f7 WatchSource:0}: Error finding container 208d52cf2eb05b40a226e4a1738a5480e549f0f08e4a6ad6b02178c49de677f7: Status 404 returned error can't find the container with id 208d52cf2eb05b40a226e4a1738a5480e549f0f08e4a6ad6b02178c49de677f7 Jan 30 08:27:56 crc kubenswrapper[4870]: I0130 08:27:56.697261 4870 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9g27p"] Jan 30 08:27:56 crc kubenswrapper[4870]: I0130 08:27:56.698056 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"d501bb9c-d88d-4362-a48e-4d0347ecc90e","Type":"ContainerStarted","Data":"d6051b0c5bd2d63d9f43ae131b460100c45ef28a76e95d9c82c1f29baab7429d"} Jan 30 08:27:56 crc kubenswrapper[4870]: I0130 08:27:56.700425 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3be4280c-f244-49ee-8731-bf39ac51ee1e","Type":"ContainerStarted","Data":"ce3f73b3878e9503e479cb8deefa9a49d72c579fcd3b7d49136ba600b5e48a5d"} Jan 30 08:27:56 crc kubenswrapper[4870]: I0130 08:27:56.702520 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" event={"ID":"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8","Type":"ContainerStarted","Data":"0596995a387e00d4157105eb8a5a484304961f9369875c777b77a509e3229e35"} Jan 30 08:27:56 crc kubenswrapper[4870]: I0130 08:27:56.942180 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d56cb75f7-5b6cr"] Jan 30 08:27:56 crc kubenswrapper[4870]: W0130 08:27:56.945093 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fc2a1f3_54bc_4554_a413_69bc35b58a2f.slice/crio-7e31fa7de08f6fe5e038c395ec730d87de7703f0769687addc3ef103068cb495 WatchSource:0}: Error finding container 7e31fa7de08f6fe5e038c395ec730d87de7703f0769687addc3ef103068cb495: Status 404 returned error can't find the container with id 7e31fa7de08f6fe5e038c395ec730d87de7703f0769687addc3ef103068cb495 Jan 30 08:27:56 crc kubenswrapper[4870]: I0130 08:27:56.965429 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fbb4d475f-66fsw"] Jan 30 08:27:56 crc kubenswrapper[4870]: I0130 08:27:56.995367 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-b57k5"] Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.034944 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:27:57 crc kubenswrapper[4870]: W0130 08:27:57.051255 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bf6a0a0_7d14_4cbd_96e4_c81ac5366fb2.slice/crio-0a2dbcce6be2e5137bdbc1dec4f8f525f5301e8818ebf37e38952868cb263db6 WatchSource:0}: Error finding container 0a2dbcce6be2e5137bdbc1dec4f8f525f5301e8818ebf37e38952868cb263db6: Status 404 returned error can't find the container with id 0a2dbcce6be2e5137bdbc1dec4f8f525f5301e8818ebf37e38952868cb263db6 Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.053003 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fb6b548c7-56kg5"] Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.060985 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-d2mx7"] Jan 30 08:27:57 crc kubenswrapper[4870]: W0130 08:27:57.065099 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3bd649e_5c3c_495f_933f_3b516167cbd2.slice/crio-545706fcd20ca2bc6bb776e0ab7efebb3759d48b4ad6e3c5ac851eb2e476dd66 WatchSource:0}: Error finding container 545706fcd20ca2bc6bb776e0ab7efebb3759d48b4ad6e3c5ac851eb2e476dd66: Status 404 returned error can't find the container with id 
545706fcd20ca2bc6bb776e0ab7efebb3759d48b4ad6e3c5ac851eb2e476dd66 Jan 30 08:27:57 crc kubenswrapper[4870]: W0130 08:27:57.065930 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe9b9169_ab54_46ee_acb5_d1dc0047e59c.slice/crio-c04be222da792bc780412525fe8f824421a705de698d7617b4310714f4dd2987 WatchSource:0}: Error finding container c04be222da792bc780412525fe8f824421a705de698d7617b4310714f4dd2987: Status 404 returned error can't find the container with id c04be222da792bc780412525fe8f824421a705de698d7617b4310714f4dd2987 Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.324110 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.338859 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fb6b548c7-56kg5"] Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.449937 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5949fbc84f-vdxjp"] Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.454246 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5949fbc84f-vdxjp" Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.477822 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5949fbc84f-vdxjp"] Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.492206 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.579415 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4171155c-1d8c-48a0-9675-1c730f9130dc-config-data\") pod \"horizon-5949fbc84f-vdxjp\" (UID: \"4171155c-1d8c-48a0-9675-1c730f9130dc\") " pod="openstack/horizon-5949fbc84f-vdxjp" Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.579670 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj77z\" (UniqueName: \"kubernetes.io/projected/4171155c-1d8c-48a0-9675-1c730f9130dc-kube-api-access-xj77z\") pod \"horizon-5949fbc84f-vdxjp\" (UID: \"4171155c-1d8c-48a0-9675-1c730f9130dc\") " pod="openstack/horizon-5949fbc84f-vdxjp" Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.579735 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4171155c-1d8c-48a0-9675-1c730f9130dc-scripts\") pod \"horizon-5949fbc84f-vdxjp\" (UID: \"4171155c-1d8c-48a0-9675-1c730f9130dc\") " pod="openstack/horizon-5949fbc84f-vdxjp" Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.579779 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4171155c-1d8c-48a0-9675-1c730f9130dc-horizon-secret-key\") pod \"horizon-5949fbc84f-vdxjp\" (UID: \"4171155c-1d8c-48a0-9675-1c730f9130dc\") " pod="openstack/horizon-5949fbc84f-vdxjp" Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.579812 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4171155c-1d8c-48a0-9675-1c730f9130dc-logs\") pod \"horizon-5949fbc84f-vdxjp\" (UID: \"4171155c-1d8c-48a0-9675-1c730f9130dc\") " pod="openstack/horizon-5949fbc84f-vdxjp" Jan 30 08:27:57 
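
Each "No sandbox for pod can be found. Need to start a new one" line above marks the point where the kubelet's sync loop finds no pod sandbox for a pod and asks the container runtime to create one; the cadvisor "Failed to process watch event ... can't find the container" warnings interleaved with them are a benign race in which cadvisor sees the new crio-* cgroup before the container is registered with it. A rough sketch of the sandbox decision follows, using invented local types rather than the real CRI structures; it also covers the "No ready sandbox" variant that appears later in this log.

    package main

    import "fmt"

    type sandboxState int

    const (
        sandboxReady sandboxState = iota
        sandboxNotReady
    )

    type sandbox struct {
        id      string
        state   sandboxState
        attempt uint32
    }

    // needNewSandbox mirrors the two log messages seen here: no sandbox at
    // all, or no sandbox in the ready state.
    func needNewSandbox(existing []sandbox) (bool, string, uint32) {
        if len(existing) == 0 {
            return true, "No sandbox for pod can be found. Need to start a new one", 0
        }
        latest := existing[0]
        for _, s := range existing {
            if s.attempt > latest.attempt {
                latest = s
            }
        }
        if latest.state != sandboxReady {
            return true, "No ready sandbox for pod can be found. Need to start a new one", latest.attempt + 1
        }
        return false, "", latest.attempt
    }

    func main() {
        need, msg, attempt := needNewSandbox(nil)
        fmt.Println(need, msg, "attempt:", attempt)
        need, msg, attempt = needNewSandbox([]sandbox{{id: "0596995a...", state: sandboxNotReady, attempt: 0}})
        fmt.Println(need, msg, "attempt:", attempt)
    }
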
crc kubenswrapper[4870]: I0130 08:27:57.682305 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4171155c-1d8c-48a0-9675-1c730f9130dc-logs\") pod \"horizon-5949fbc84f-vdxjp\" (UID: \"4171155c-1d8c-48a0-9675-1c730f9130dc\") " pod="openstack/horizon-5949fbc84f-vdxjp" Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.682398 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4171155c-1d8c-48a0-9675-1c730f9130dc-config-data\") pod \"horizon-5949fbc84f-vdxjp\" (UID: \"4171155c-1d8c-48a0-9675-1c730f9130dc\") " pod="openstack/horizon-5949fbc84f-vdxjp" Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.682430 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj77z\" (UniqueName: \"kubernetes.io/projected/4171155c-1d8c-48a0-9675-1c730f9130dc-kube-api-access-xj77z\") pod \"horizon-5949fbc84f-vdxjp\" (UID: \"4171155c-1d8c-48a0-9675-1c730f9130dc\") " pod="openstack/horizon-5949fbc84f-vdxjp" Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.682485 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4171155c-1d8c-48a0-9675-1c730f9130dc-scripts\") pod \"horizon-5949fbc84f-vdxjp\" (UID: \"4171155c-1d8c-48a0-9675-1c730f9130dc\") " pod="openstack/horizon-5949fbc84f-vdxjp" Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.682527 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4171155c-1d8c-48a0-9675-1c730f9130dc-horizon-secret-key\") pod \"horizon-5949fbc84f-vdxjp\" (UID: \"4171155c-1d8c-48a0-9675-1c730f9130dc\") " pod="openstack/horizon-5949fbc84f-vdxjp" Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.682819 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4171155c-1d8c-48a0-9675-1c730f9130dc-logs\") pod \"horizon-5949fbc84f-vdxjp\" (UID: \"4171155c-1d8c-48a0-9675-1c730f9130dc\") " pod="openstack/horizon-5949fbc84f-vdxjp" Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.683949 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4171155c-1d8c-48a0-9675-1c730f9130dc-scripts\") pod \"horizon-5949fbc84f-vdxjp\" (UID: \"4171155c-1d8c-48a0-9675-1c730f9130dc\") " pod="openstack/horizon-5949fbc84f-vdxjp" Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.684094 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4171155c-1d8c-48a0-9675-1c730f9130dc-config-data\") pod \"horizon-5949fbc84f-vdxjp\" (UID: \"4171155c-1d8c-48a0-9675-1c730f9130dc\") " pod="openstack/horizon-5949fbc84f-vdxjp" Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.689298 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4171155c-1d8c-48a0-9675-1c730f9130dc-horizon-secret-key\") pod \"horizon-5949fbc84f-vdxjp\" (UID: \"4171155c-1d8c-48a0-9675-1c730f9130dc\") " pod="openstack/horizon-5949fbc84f-vdxjp" Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.704182 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj77z\" (UniqueName: 
\"kubernetes.io/projected/4171155c-1d8c-48a0-9675-1c730f9130dc-kube-api-access-xj77z\") pod \"horizon-5949fbc84f-vdxjp\" (UID: \"4171155c-1d8c-48a0-9675-1c730f9130dc\") " pod="openstack/horizon-5949fbc84f-vdxjp" Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.733760 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-b57k5" event={"ID":"1435e0c6-e24a-44d4-bf78-3e5300e23cdd","Type":"ContainerStarted","Data":"09157ba4d27c88f05b3bcbf87559e2f7cd18de58cca7f08d086b19254b605ef0"} Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.737306 4870 generic.go:334] "Generic (PLEG): container finished" podID="fc98de4d-b882-4f13-bc7e-1e6070ffd7d8" containerID="4dee564f51791741ba4e6cb761d557f18b1c1b23b27913388a9dbed54cdf4c9e" exitCode=0 Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.737391 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" event={"ID":"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8","Type":"ContainerDied","Data":"4dee564f51791741ba4e6cb761d557f18b1c1b23b27913388a9dbed54cdf4c9e"} Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.742937 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9g27p" event={"ID":"685bde78-dea1-4864-a825-af176178bd11","Type":"ContainerStarted","Data":"208d52cf2eb05b40a226e4a1738a5480e549f0f08e4a6ad6b02178c49de677f7"} Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.743980 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9mjj4" event={"ID":"505df376-c8bc-44ce-9c14-8cf94730c550","Type":"ContainerStarted","Data":"51fd04d1413a7bb8dd1010fcf50ab478d7211a73c87542e70aaae3ce82cc9053"} Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.744003 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9mjj4" event={"ID":"505df376-c8bc-44ce-9c14-8cf94730c550","Type":"ContainerStarted","Data":"171c5c07c9fb91c243425bc5e80be08d88ca8fe65555c57f93bf344f77f94faf"} Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.749136 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d56cb75f7-5b6cr" event={"ID":"547994a2-f3d5-4ac9-a025-2644e86fe00d","Type":"ContainerStarted","Data":"3856f7ed6f6f6e5768d9f69766b166a6b372b3a627f11ff19e5e79941f1444f8"} Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.751657 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3be4280c-f244-49ee-8731-bf39ac51ee1e","Type":"ContainerStarted","Data":"4613e64cbab9e5f1e4c63d6f62c12ca1aa12d4d56ce06a57b13b9b4dcc74559f"} Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.754611 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fb6b548c7-56kg5" event={"ID":"fe9b9169-ab54-46ee-acb5-d1dc0047e59c","Type":"ContainerStarted","Data":"c04be222da792bc780412525fe8f824421a705de698d7617b4310714f4dd2987"} Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.755901 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-d2mx7" event={"ID":"c3bd649e-5c3c-495f-933f-3b516167cbd2","Type":"ContainerStarted","Data":"545706fcd20ca2bc6bb776e0ab7efebb3759d48b4ad6e3c5ac851eb2e476dd66"} Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.757223 4870 generic.go:334] "Generic (PLEG): container finished" podID="7fc2a1f3-54bc-4554-a413-69bc35b58a2f" containerID="38a1debf6b26f10553fce6710f3f79987242363403d1aac4313b2a726405eb4c" exitCode=0 Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 
08:27:57.757278 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" event={"ID":"7fc2a1f3-54bc-4554-a413-69bc35b58a2f","Type":"ContainerDied","Data":"38a1debf6b26f10553fce6710f3f79987242363403d1aac4313b2a726405eb4c"} Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.757297 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" event={"ID":"7fc2a1f3-54bc-4554-a413-69bc35b58a2f","Type":"ContainerStarted","Data":"7e31fa7de08f6fe5e038c395ec730d87de7703f0769687addc3ef103068cb495"} Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.767593 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2","Type":"ContainerStarted","Data":"0a2dbcce6be2e5137bdbc1dec4f8f525f5301e8818ebf37e38952868cb263db6"} Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.769747 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-9mjj4" podStartSLOduration=2.769733879 podStartE2EDuration="2.769733879s" podCreationTimestamp="2026-01-30 08:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:27:57.766433116 +0000 UTC m=+1116.461980225" watchObservedRunningTime="2026-01-30 08:27:57.769733879 +0000 UTC m=+1116.465280988" Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.772668 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vd7q8" event={"ID":"38224a9d-ced6-4f76-8117-18e7ca7f33e7","Type":"ContainerStarted","Data":"3cc3794f576037b7275283832f1fcd12d44b3421b4fb40fee74fe7e2b82882e4"} Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.814004 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5949fbc84f-vdxjp" Jan 30 08:27:57 crc kubenswrapper[4870]: I0130 08:27:57.818387 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vd7q8" podStartSLOduration=3.818370551 podStartE2EDuration="3.818370551s" podCreationTimestamp="2026-01-30 08:27:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:27:57.809830536 +0000 UTC m=+1116.505377665" watchObservedRunningTime="2026-01-30 08:27:57.818370551 +0000 UTC m=+1116.513917650" Jan 30 08:28:03 crc kubenswrapper[4870]: I0130 08:28:03.843298 4870 generic.go:334] "Generic (PLEG): container finished" podID="38224a9d-ced6-4f76-8117-18e7ca7f33e7" containerID="3cc3794f576037b7275283832f1fcd12d44b3421b4fb40fee74fe7e2b82882e4" exitCode=0 Jan 30 08:28:03 crc kubenswrapper[4870]: I0130 08:28:03.843341 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vd7q8" event={"ID":"38224a9d-ced6-4f76-8117-18e7ca7f33e7","Type":"ContainerDied","Data":"3cc3794f576037b7275283832f1fcd12d44b3421b4fb40fee74fe7e2b82882e4"} Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.305492 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d56cb75f7-5b6cr"] Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.384017 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-74569d8966-5sjxs"] Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.385645 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.388584 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.393000 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74569d8966-5sjxs"] Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.446315 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1872a14d-aeff-46f7-8430-c6fe0eb6973b-config-data\") pod \"horizon-74569d8966-5sjxs\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.446393 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1872a14d-aeff-46f7-8430-c6fe0eb6973b-horizon-secret-key\") pod \"horizon-74569d8966-5sjxs\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.446485 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1872a14d-aeff-46f7-8430-c6fe0eb6973b-scripts\") pod \"horizon-74569d8966-5sjxs\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.446507 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1872a14d-aeff-46f7-8430-c6fe0eb6973b-logs\") pod \"horizon-74569d8966-5sjxs\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.446611 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1872a14d-aeff-46f7-8430-c6fe0eb6973b-combined-ca-bundle\") pod \"horizon-74569d8966-5sjxs\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.446640 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1872a14d-aeff-46f7-8430-c6fe0eb6973b-horizon-tls-certs\") pod \"horizon-74569d8966-5sjxs\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.446666 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdz8p\" (UniqueName: \"kubernetes.io/projected/1872a14d-aeff-46f7-8430-c6fe0eb6973b-kube-api-access-fdz8p\") pod \"horizon-74569d8966-5sjxs\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.456060 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5949fbc84f-vdxjp"] Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.488985 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-769d7654db-gw44c"] Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.490664 
4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.524037 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-769d7654db-gw44c"] Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.548244 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1872a14d-aeff-46f7-8430-c6fe0eb6973b-scripts\") pod \"horizon-74569d8966-5sjxs\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.548287 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1872a14d-aeff-46f7-8430-c6fe0eb6973b-logs\") pod \"horizon-74569d8966-5sjxs\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.548324 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6c9337c-50ce-4c5c-a84f-8092d25fa1e2-logs\") pod \"horizon-769d7654db-gw44c\" (UID: \"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2\") " pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.548367 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6c9337c-50ce-4c5c-a84f-8092d25fa1e2-horizon-tls-certs\") pod \"horizon-769d7654db-gw44c\" (UID: \"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2\") " pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.548397 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6c9337c-50ce-4c5c-a84f-8092d25fa1e2-scripts\") pod \"horizon-769d7654db-gw44c\" (UID: \"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2\") " pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.548427 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6c9337c-50ce-4c5c-a84f-8092d25fa1e2-config-data\") pod \"horizon-769d7654db-gw44c\" (UID: \"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2\") " pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.548444 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1872a14d-aeff-46f7-8430-c6fe0eb6973b-combined-ca-bundle\") pod \"horizon-74569d8966-5sjxs\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.548466 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8bg8\" (UniqueName: \"kubernetes.io/projected/b6c9337c-50ce-4c5c-a84f-8092d25fa1e2-kube-api-access-s8bg8\") pod \"horizon-769d7654db-gw44c\" (UID: \"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2\") " pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.548487 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1872a14d-aeff-46f7-8430-c6fe0eb6973b-horizon-tls-certs\") pod \"horizon-74569d8966-5sjxs\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.548508 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdz8p\" (UniqueName: \"kubernetes.io/projected/1872a14d-aeff-46f7-8430-c6fe0eb6973b-kube-api-access-fdz8p\") pod \"horizon-74569d8966-5sjxs\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.548538 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1872a14d-aeff-46f7-8430-c6fe0eb6973b-config-data\") pod \"horizon-74569d8966-5sjxs\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.548563 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c9337c-50ce-4c5c-a84f-8092d25fa1e2-combined-ca-bundle\") pod \"horizon-769d7654db-gw44c\" (UID: \"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2\") " pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.548586 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1872a14d-aeff-46f7-8430-c6fe0eb6973b-horizon-secret-key\") pod \"horizon-74569d8966-5sjxs\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.548615 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b6c9337c-50ce-4c5c-a84f-8092d25fa1e2-horizon-secret-key\") pod \"horizon-769d7654db-gw44c\" (UID: \"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2\") " pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.550088 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1872a14d-aeff-46f7-8430-c6fe0eb6973b-scripts\") pod \"horizon-74569d8966-5sjxs\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.551243 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1872a14d-aeff-46f7-8430-c6fe0eb6973b-config-data\") pod \"horizon-74569d8966-5sjxs\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.551749 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1872a14d-aeff-46f7-8430-c6fe0eb6973b-logs\") pod \"horizon-74569d8966-5sjxs\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.555411 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1872a14d-aeff-46f7-8430-c6fe0eb6973b-combined-ca-bundle\") pod \"horizon-74569d8966-5sjxs\" (UID: 
\"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.559889 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1872a14d-aeff-46f7-8430-c6fe0eb6973b-horizon-tls-certs\") pod \"horizon-74569d8966-5sjxs\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.560181 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1872a14d-aeff-46f7-8430-c6fe0eb6973b-horizon-secret-key\") pod \"horizon-74569d8966-5sjxs\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.571925 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdz8p\" (UniqueName: \"kubernetes.io/projected/1872a14d-aeff-46f7-8430-c6fe0eb6973b-kube-api-access-fdz8p\") pod \"horizon-74569d8966-5sjxs\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.650555 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6c9337c-50ce-4c5c-a84f-8092d25fa1e2-scripts\") pod \"horizon-769d7654db-gw44c\" (UID: \"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2\") " pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.651161 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6c9337c-50ce-4c5c-a84f-8092d25fa1e2-config-data\") pod \"horizon-769d7654db-gw44c\" (UID: \"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2\") " pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.651296 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6c9337c-50ce-4c5c-a84f-8092d25fa1e2-scripts\") pod \"horizon-769d7654db-gw44c\" (UID: \"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2\") " pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.651402 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8bg8\" (UniqueName: \"kubernetes.io/projected/b6c9337c-50ce-4c5c-a84f-8092d25fa1e2-kube-api-access-s8bg8\") pod \"horizon-769d7654db-gw44c\" (UID: \"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2\") " pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.651593 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c9337c-50ce-4c5c-a84f-8092d25fa1e2-combined-ca-bundle\") pod \"horizon-769d7654db-gw44c\" (UID: \"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2\") " pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.651648 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b6c9337c-50ce-4c5c-a84f-8092d25fa1e2-horizon-secret-key\") pod \"horizon-769d7654db-gw44c\" (UID: \"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2\") " pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 
08:28:04.652246 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6c9337c-50ce-4c5c-a84f-8092d25fa1e2-logs\") pod \"horizon-769d7654db-gw44c\" (UID: \"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2\") " pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.652322 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6c9337c-50ce-4c5c-a84f-8092d25fa1e2-horizon-tls-certs\") pod \"horizon-769d7654db-gw44c\" (UID: \"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2\") " pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.652555 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6c9337c-50ce-4c5c-a84f-8092d25fa1e2-logs\") pod \"horizon-769d7654db-gw44c\" (UID: \"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2\") " pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.652860 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6c9337c-50ce-4c5c-a84f-8092d25fa1e2-config-data\") pod \"horizon-769d7654db-gw44c\" (UID: \"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2\") " pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.656059 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c9337c-50ce-4c5c-a84f-8092d25fa1e2-combined-ca-bundle\") pod \"horizon-769d7654db-gw44c\" (UID: \"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2\") " pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.656504 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6c9337c-50ce-4c5c-a84f-8092d25fa1e2-horizon-tls-certs\") pod \"horizon-769d7654db-gw44c\" (UID: \"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2\") " pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.661766 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b6c9337c-50ce-4c5c-a84f-8092d25fa1e2-horizon-secret-key\") pod \"horizon-769d7654db-gw44c\" (UID: \"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2\") " pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.683971 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8bg8\" (UniqueName: \"kubernetes.io/projected/b6c9337c-50ce-4c5c-a84f-8092d25fa1e2-kube-api-access-s8bg8\") pod \"horizon-769d7654db-gw44c\" (UID: \"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2\") " pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.710162 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:04 crc kubenswrapper[4870]: I0130 08:28:04.815381 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:12 crc kubenswrapper[4870]: E0130 08:28:12.192996 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 30 08:28:12 crc kubenswrapper[4870]: E0130 08:28:12.193505 4870 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 30 08:28:12 crc kubenswrapper[4870]: E0130 08:28:12.193700 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:38.102.83.23:5001/podified-master-centos10/openstack-glance-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7n8n8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-tssp8_openstack(edd09a42-14b6-4161-ba2a-82c4cf4f5983): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:28:12 crc kubenswrapper[4870]: E0130 08:28:12.195238 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-tssp8" podUID="edd09a42-14b6-4161-ba2a-82c4cf4f5983" Jan 30 08:28:12 crc kubenswrapper[4870]: E0130 08:28:12.935293 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"38.102.83.23:5001/podified-master-centos10/openstack-glance-api:watcher_latest\\\"\"" pod="openstack/glance-db-sync-tssp8" podUID="edd09a42-14b6-4161-ba2a-82c4cf4f5983" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.388833 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.395409 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vd7q8" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.555675 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nf46\" (UniqueName: \"kubernetes.io/projected/38224a9d-ced6-4f76-8117-18e7ca7f33e7-kube-api-access-2nf46\") pod \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.555728 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-ovsdbserver-sb\") pod \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.555845 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-dns-svc\") pod \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.555932 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-credential-keys\") pod \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.555952 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-fernet-keys\") pod \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.555991 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-ovsdbserver-nb\") pod \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.556106 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-config\") pod \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.556181 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-dns-swift-storage-0\") pod \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.556214 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-combined-ca-bundle\") pod \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.556245 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-config-data\") pod \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.556267 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-scripts\") pod \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\" (UID: \"38224a9d-ced6-4f76-8117-18e7ca7f33e7\") " Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.556301 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s8jg\" (UniqueName: \"kubernetes.io/projected/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-kube-api-access-8s8jg\") pod \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\" (UID: \"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8\") " Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.562451 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "38224a9d-ced6-4f76-8117-18e7ca7f33e7" (UID: "38224a9d-ced6-4f76-8117-18e7ca7f33e7"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.563650 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-kube-api-access-8s8jg" (OuterVolumeSpecName: "kube-api-access-8s8jg") pod "fc98de4d-b882-4f13-bc7e-1e6070ffd7d8" (UID: "fc98de4d-b882-4f13-bc7e-1e6070ffd7d8"). InnerVolumeSpecName "kube-api-access-8s8jg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.565565 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38224a9d-ced6-4f76-8117-18e7ca7f33e7-kube-api-access-2nf46" (OuterVolumeSpecName: "kube-api-access-2nf46") pod "38224a9d-ced6-4f76-8117-18e7ca7f33e7" (UID: "38224a9d-ced6-4f76-8117-18e7ca7f33e7"). InnerVolumeSpecName "kube-api-access-2nf46". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.566061 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-scripts" (OuterVolumeSpecName: "scripts") pod "38224a9d-ced6-4f76-8117-18e7ca7f33e7" (UID: "38224a9d-ced6-4f76-8117-18e7ca7f33e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.566153 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "38224a9d-ced6-4f76-8117-18e7ca7f33e7" (UID: "38224a9d-ced6-4f76-8117-18e7ca7f33e7"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.580802 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fc98de4d-b882-4f13-bc7e-1e6070ffd7d8" (UID: "fc98de4d-b882-4f13-bc7e-1e6070ffd7d8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.587158 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fc98de4d-b882-4f13-bc7e-1e6070ffd7d8" (UID: "fc98de4d-b882-4f13-bc7e-1e6070ffd7d8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.589104 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fc98de4d-b882-4f13-bc7e-1e6070ffd7d8" (UID: "fc98de4d-b882-4f13-bc7e-1e6070ffd7d8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.589938 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38224a9d-ced6-4f76-8117-18e7ca7f33e7" (UID: "38224a9d-ced6-4f76-8117-18e7ca7f33e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.593674 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fc98de4d-b882-4f13-bc7e-1e6070ffd7d8" (UID: "fc98de4d-b882-4f13-bc7e-1e6070ffd7d8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.600482 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-config" (OuterVolumeSpecName: "config") pod "fc98de4d-b882-4f13-bc7e-1e6070ffd7d8" (UID: "fc98de4d-b882-4f13-bc7e-1e6070ffd7d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.611106 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-config-data" (OuterVolumeSpecName: "config-data") pod "38224a9d-ced6-4f76-8117-18e7ca7f33e7" (UID: "38224a9d-ced6-4f76-8117-18e7ca7f33e7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.658430 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s8jg\" (UniqueName: \"kubernetes.io/projected/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-kube-api-access-8s8jg\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.658458 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nf46\" (UniqueName: \"kubernetes.io/projected/38224a9d-ced6-4f76-8117-18e7ca7f33e7-kube-api-access-2nf46\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.658469 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.658479 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.658488 4870 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.658496 4870 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.658504 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.658512 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.658519 4870 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.658527 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.658536 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:24 crc kubenswrapper[4870]: I0130 08:28:24.658544 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38224a9d-ced6-4f76-8117-18e7ca7f33e7-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:24 crc kubenswrapper[4870]: E0130 08:28:24.757094 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-watcher-applier:watcher_latest" Jan 30 08:28:24 crc 
kubenswrapper[4870]: E0130 08:28:24.757167 4870 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-watcher-applier:watcher_latest" Jan 30 08:28:24 crc kubenswrapper[4870]: E0130 08:28:24.757481 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-applier,Image:38.102.83.23:5001/podified-master-centos10/openstack-watcher-applier:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d6h75h596h649h5fhd7hdfhc9h587h5f7h55bhd8hdbh5d4h5dfh567h5c7hcbh5b7h5b7h686h64ch95h5cfh557h654h568h54ch646hdh8bh9fq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-applier-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/watcher,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqd86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -r DRST watcher-applier],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -r DRST watcher-applier],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42451,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -r DRST watcher-applier],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-applier-0_openstack(d501bb9c-d88d-4362-a48e-4d0347ecc90e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Jan 30 08:28:24 crc kubenswrapper[4870]: E0130 08:28:24.758999 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-applier-0" podUID="d501bb9c-d88d-4362-a48e-4d0347ecc90e" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.037789 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" event={"ID":"fc98de4d-b882-4f13-bc7e-1e6070ffd7d8","Type":"ContainerDied","Data":"0596995a387e00d4157105eb8a5a484304961f9369875c777b77a509e3229e35"} Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.038107 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bb457dfc5-294j8" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.038415 4870 scope.go:117] "RemoveContainer" containerID="4dee564f51791741ba4e6cb761d557f18b1c1b23b27913388a9dbed54cdf4c9e" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.042200 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vd7q8" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.042989 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vd7q8" event={"ID":"38224a9d-ced6-4f76-8117-18e7ca7f33e7","Type":"ContainerDied","Data":"bf4fa656174794487020ac5e8e368cdaeecb6ab51935c7a52ce6d58beb8d1571"} Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.043024 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf4fa656174794487020ac5e8e368cdaeecb6ab51935c7a52ce6d58beb8d1571" Jan 30 08:28:25 crc kubenswrapper[4870]: E0130 08:28:25.044964 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.23:5001/podified-master-centos10/openstack-watcher-applier:watcher_latest\\\"\"" pod="openstack/watcher-applier-0" podUID="d501bb9c-d88d-4362-a48e-4d0347ecc90e" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.106354 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bb457dfc5-294j8"] Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.113522 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bb457dfc5-294j8"] Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.254425 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.254473 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.492356 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vd7q8"] Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.500859 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-bootstrap-vd7q8"] Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.600806 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-g4m9m"] Jan 30 08:28:25 crc kubenswrapper[4870]: E0130 08:28:25.601248 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc98de4d-b882-4f13-bc7e-1e6070ffd7d8" containerName="init" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.601269 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc98de4d-b882-4f13-bc7e-1e6070ffd7d8" containerName="init" Jan 30 08:28:25 crc kubenswrapper[4870]: E0130 08:28:25.601300 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38224a9d-ced6-4f76-8117-18e7ca7f33e7" containerName="keystone-bootstrap" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.601306 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="38224a9d-ced6-4f76-8117-18e7ca7f33e7" containerName="keystone-bootstrap" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.601463 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc98de4d-b882-4f13-bc7e-1e6070ffd7d8" containerName="init" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.601496 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="38224a9d-ced6-4f76-8117-18e7ca7f33e7" containerName="keystone-bootstrap" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.602099 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g4m9m" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.605110 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.605326 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.605563 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.605766 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.605947 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vn7b5" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.612014 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-g4m9m"] Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.677940 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-config-data\") pod \"keystone-bootstrap-g4m9m\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " pod="openstack/keystone-bootstrap-g4m9m" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.678195 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-credential-keys\") pod \"keystone-bootstrap-g4m9m\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " pod="openstack/keystone-bootstrap-g4m9m" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.678258 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-combined-ca-bundle\") pod \"keystone-bootstrap-g4m9m\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " pod="openstack/keystone-bootstrap-g4m9m" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.678289 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-scripts\") pod \"keystone-bootstrap-g4m9m\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " pod="openstack/keystone-bootstrap-g4m9m" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.678541 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-fernet-keys\") pod \"keystone-bootstrap-g4m9m\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " pod="openstack/keystone-bootstrap-g4m9m" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.678620 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t66xr\" (UniqueName: \"kubernetes.io/projected/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-kube-api-access-t66xr\") pod \"keystone-bootstrap-g4m9m\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " pod="openstack/keystone-bootstrap-g4m9m" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.782064 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-credential-keys\") pod \"keystone-bootstrap-g4m9m\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " pod="openstack/keystone-bootstrap-g4m9m" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.782144 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-combined-ca-bundle\") pod \"keystone-bootstrap-g4m9m\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " pod="openstack/keystone-bootstrap-g4m9m" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.782171 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-scripts\") pod \"keystone-bootstrap-g4m9m\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " pod="openstack/keystone-bootstrap-g4m9m" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.782233 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-fernet-keys\") pod \"keystone-bootstrap-g4m9m\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " pod="openstack/keystone-bootstrap-g4m9m" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.782264 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t66xr\" (UniqueName: \"kubernetes.io/projected/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-kube-api-access-t66xr\") pod \"keystone-bootstrap-g4m9m\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " pod="openstack/keystone-bootstrap-g4m9m" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.782431 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-config-data\") pod 
\"keystone-bootstrap-g4m9m\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " pod="openstack/keystone-bootstrap-g4m9m" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.786670 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-scripts\") pod \"keystone-bootstrap-g4m9m\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " pod="openstack/keystone-bootstrap-g4m9m" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.788488 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-fernet-keys\") pod \"keystone-bootstrap-g4m9m\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " pod="openstack/keystone-bootstrap-g4m9m" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.788700 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-config-data\") pod \"keystone-bootstrap-g4m9m\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " pod="openstack/keystone-bootstrap-g4m9m" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.792389 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-credential-keys\") pod \"keystone-bootstrap-g4m9m\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " pod="openstack/keystone-bootstrap-g4m9m" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.793043 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-combined-ca-bundle\") pod \"keystone-bootstrap-g4m9m\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " pod="openstack/keystone-bootstrap-g4m9m" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.803506 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t66xr\" (UniqueName: \"kubernetes.io/projected/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-kube-api-access-t66xr\") pod \"keystone-bootstrap-g4m9m\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " pod="openstack/keystone-bootstrap-g4m9m" Jan 30 08:28:25 crc kubenswrapper[4870]: I0130 08:28:25.924059 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-g4m9m" Jan 30 08:28:25 crc kubenswrapper[4870]: E0130 08:28:25.983042 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 30 08:28:25 crc kubenswrapper[4870]: E0130 08:28:25.983107 4870 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 30 08:28:25 crc kubenswrapper[4870]: E0130 08:28:25.983272 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.23:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lgl2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-9g27p_openstack(685bde78-dea1-4864-a825-af176178bd11): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:28:25 crc kubenswrapper[4870]: E0130 08:28:25.984677 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = 
copying config: context canceled\"" pod="openstack/cinder-db-sync-9g27p" podUID="685bde78-dea1-4864-a825-af176178bd11" Jan 30 08:28:26 crc kubenswrapper[4870]: E0130 08:28:26.075300 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.23:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-9g27p" podUID="685bde78-dea1-4864-a825-af176178bd11" Jan 30 08:28:26 crc kubenswrapper[4870]: I0130 08:28:26.092547 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38224a9d-ced6-4f76-8117-18e7ca7f33e7" path="/var/lib/kubelet/pods/38224a9d-ced6-4f76-8117-18e7ca7f33e7/volumes" Jan 30 08:28:26 crc kubenswrapper[4870]: I0130 08:28:26.093169 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc98de4d-b882-4f13-bc7e-1e6070ffd7d8" path="/var/lib/kubelet/pods/fc98de4d-b882-4f13-bc7e-1e6070ffd7d8/volumes" Jan 30 08:28:32 crc kubenswrapper[4870]: E0130 08:28:32.272263 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:9297fe5be2ac1cc1fd34b411e74e2fd0c8cfdcfffd9039224ac97aa7f09437b4: Get \"http://38.102.83.23:5001/v2/podified-master-centos10/openstack-barbican-api/blobs/sha256:9297fe5be2ac1cc1fd34b411e74e2fd0c8cfdcfffd9039224ac97aa7f09437b4\": context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 30 08:28:32 crc kubenswrapper[4870]: E0130 08:28:32.272688 4870 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = reading blob sha256:9297fe5be2ac1cc1fd34b411e74e2fd0c8cfdcfffd9039224ac97aa7f09437b4: Get \"http://38.102.83.23:5001/v2/podified-master-centos10/openstack-barbican-api/blobs/sha256:9297fe5be2ac1cc1fd34b411e74e2fd0c8cfdcfffd9039224ac97aa7f09437b4\": context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 30 08:28:32 crc kubenswrapper[4870]: E0130 08:28:32.272806 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.23:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmq48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-d2mx7_openstack(c3bd649e-5c3c-495f-933f-3b516167cbd2): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:9297fe5be2ac1cc1fd34b411e74e2fd0c8cfdcfffd9039224ac97aa7f09437b4: Get \"http://38.102.83.23:5001/v2/podified-master-centos10/openstack-barbican-api/blobs/sha256:9297fe5be2ac1cc1fd34b411e74e2fd0c8cfdcfffd9039224ac97aa7f09437b4\": context canceled" logger="UnhandledError" Jan 30 08:28:32 crc kubenswrapper[4870]: E0130 08:28:32.277475 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 30 08:28:32 crc kubenswrapper[4870]: E0130 08:28:32.277508 4870 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 30 08:28:32 crc kubenswrapper[4870]: E0130 08:28:32.277602 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.23:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n64dh689h676h7fh5bbh58fhb7h574h6bh58fh7dh579hdh5bh669h67fhfh5bfh559hch6bh95h567h597hc9h5fbhc9h98h7dh5d4h84h655q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7gl4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6d56cb75f7-5b6cr_openstack(547994a2-f3d5-4ac9-a025-2644e86fe00d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:28:32 crc kubenswrapper[4870]: E0130 08:28:32.277648 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:bd27ff135622ee80d6d6693f9c0bf8e444ef41832cda5564c2025ca13b50eaf0: Get \"http://38.102.83.23:5001/v2/podified-master-centos10/openstack-ceilometer-central/blobs/sha256:bd27ff135622ee80d6d6693f9c0bf8e444ef41832cda5564c2025ca13b50eaf0\": context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Jan 30 08:28:32 crc kubenswrapper[4870]: E0130 08:28:32.277677 4870 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = reading blob sha256:bd27ff135622ee80d6d6693f9c0bf8e444ef41832cda5564c2025ca13b50eaf0: Get \"http://38.102.83.23:5001/v2/podified-master-centos10/openstack-ceilometer-central/blobs/sha256:bd27ff135622ee80d6d6693f9c0bf8e444ef41832cda5564c2025ca13b50eaf0\": context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Jan 30 08:28:32 crc kubenswrapper[4870]: E0130 08:28:32.277754 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:38.102.83.23:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n99h86h65bhdbh5cdh5dfh97h656h68bh548hch567h5b4h685hf7h567h5f8h67dh8dh68h657h75h699h5d9h696hb4h588h99h545hf6h694h67q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gdb9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:bd27ff135622ee80d6d6693f9c0bf8e444ef41832cda5564c2025ca13b50eaf0: Get \"http://38.102.83.23:5001/v2/podified-master-centos10/openstack-ceilometer-central/blobs/sha256:bd27ff135622ee80d6d6693f9c0bf8e444ef41832cda5564c2025ca13b50eaf0\": context canceled" logger="UnhandledError" Jan 30 08:28:32 crc kubenswrapper[4870]: E0130 08:28:32.277802 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:9297fe5be2ac1cc1fd34b411e74e2fd0c8cfdcfffd9039224ac97aa7f09437b4: Get \\\"http://38.102.83.23:5001/v2/podified-master-centos10/openstack-barbican-api/blobs/sha256:9297fe5be2ac1cc1fd34b411e74e2fd0c8cfdcfffd9039224ac97aa7f09437b4\\\": context canceled\"" pod="openstack/barbican-db-sync-d2mx7" podUID="c3bd649e-5c3c-495f-933f-3b516167cbd2" Jan 30 08:28:32 crc kubenswrapper[4870]: E0130 08:28:32.290799 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling 
image \\\"38.102.83.23:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-6d56cb75f7-5b6cr" podUID="547994a2-f3d5-4ac9-a025-2644e86fe00d" Jan 30 08:28:32 crc kubenswrapper[4870]: E0130 08:28:32.321138 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 30 08:28:32 crc kubenswrapper[4870]: E0130 08:28:32.321462 4870 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 30 08:28:32 crc kubenswrapper[4870]: E0130 08:28:32.321584 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.23:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n57bhdch68hbdh584h658h559hf5h8bh689h564hfbh65h6bh5c7hcdh6ch6fh57h584h667h5fbh586hddh56bh694hbdh6hb6hbchc6h5fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jnppg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5fb6b548c7-56kg5_openstack(fe9b9169-ab54-46ee-acb5-d1dc0047e59c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:28:32 crc kubenswrapper[4870]: E0130 08:28:32.324449 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.23:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-5fb6b548c7-56kg5" podUID="fe9b9169-ab54-46ee-acb5-d1dc0047e59c" Jan 30 08:28:33 crc kubenswrapper[4870]: E0130 08:28:33.136422 4870 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.23:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-d2mx7" podUID="c3bd649e-5c3c-495f-933f-3b516167cbd2" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.167554 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fb6b548c7-56kg5" event={"ID":"fe9b9169-ab54-46ee-acb5-d1dc0047e59c","Type":"ContainerDied","Data":"c04be222da792bc780412525fe8f824421a705de698d7617b4310714f4dd2987"} Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.168465 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c04be222da792bc780412525fe8f824421a705de698d7617b4310714f4dd2987" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.171824 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d56cb75f7-5b6cr" event={"ID":"547994a2-f3d5-4ac9-a025-2644e86fe00d","Type":"ContainerDied","Data":"3856f7ed6f6f6e5768d9f69766b166a6b372b3a627f11ff19e5e79941f1444f8"} Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.172002 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3856f7ed6f6f6e5768d9f69766b166a6b372b3a627f11ff19e5e79941f1444f8" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.184798 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d56cb75f7-5b6cr" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.219484 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fb6b548c7-56kg5" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.362456 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gl4q\" (UniqueName: \"kubernetes.io/projected/547994a2-f3d5-4ac9-a025-2644e86fe00d-kube-api-access-7gl4q\") pod \"547994a2-f3d5-4ac9-a025-2644e86fe00d\" (UID: \"547994a2-f3d5-4ac9-a025-2644e86fe00d\") " Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.362519 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-horizon-secret-key\") pod \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\" (UID: \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\") " Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.362559 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/547994a2-f3d5-4ac9-a025-2644e86fe00d-config-data\") pod \"547994a2-f3d5-4ac9-a025-2644e86fe00d\" (UID: \"547994a2-f3d5-4ac9-a025-2644e86fe00d\") " Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.362577 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/547994a2-f3d5-4ac9-a025-2644e86fe00d-scripts\") pod \"547994a2-f3d5-4ac9-a025-2644e86fe00d\" (UID: \"547994a2-f3d5-4ac9-a025-2644e86fe00d\") " Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.362657 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-config-data\") pod \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\" (UID: \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\") " Jan 30 08:28:34 crc 
kubenswrapper[4870]: I0130 08:28:34.362709 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/547994a2-f3d5-4ac9-a025-2644e86fe00d-horizon-secret-key\") pod \"547994a2-f3d5-4ac9-a025-2644e86fe00d\" (UID: \"547994a2-f3d5-4ac9-a025-2644e86fe00d\") " Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.362746 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnppg\" (UniqueName: \"kubernetes.io/projected/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-kube-api-access-jnppg\") pod \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\" (UID: \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\") " Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.362773 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-logs\") pod \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\" (UID: \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\") " Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.362802 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-scripts\") pod \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\" (UID: \"fe9b9169-ab54-46ee-acb5-d1dc0047e59c\") " Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.362854 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/547994a2-f3d5-4ac9-a025-2644e86fe00d-logs\") pod \"547994a2-f3d5-4ac9-a025-2644e86fe00d\" (UID: \"547994a2-f3d5-4ac9-a025-2644e86fe00d\") " Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.363167 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/547994a2-f3d5-4ac9-a025-2644e86fe00d-config-data" (OuterVolumeSpecName: "config-data") pod "547994a2-f3d5-4ac9-a025-2644e86fe00d" (UID: "547994a2-f3d5-4ac9-a025-2644e86fe00d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.364360 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/547994a2-f3d5-4ac9-a025-2644e86fe00d-scripts" (OuterVolumeSpecName: "scripts") pod "547994a2-f3d5-4ac9-a025-2644e86fe00d" (UID: "547994a2-f3d5-4ac9-a025-2644e86fe00d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.364784 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-scripts" (OuterVolumeSpecName: "scripts") pod "fe9b9169-ab54-46ee-acb5-d1dc0047e59c" (UID: "fe9b9169-ab54-46ee-acb5-d1dc0047e59c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.365087 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-logs" (OuterVolumeSpecName: "logs") pod "fe9b9169-ab54-46ee-acb5-d1dc0047e59c" (UID: "fe9b9169-ab54-46ee-acb5-d1dc0047e59c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.365272 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/547994a2-f3d5-4ac9-a025-2644e86fe00d-logs" (OuterVolumeSpecName: "logs") pod "547994a2-f3d5-4ac9-a025-2644e86fe00d" (UID: "547994a2-f3d5-4ac9-a025-2644e86fe00d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.365261 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-config-data" (OuterVolumeSpecName: "config-data") pod "fe9b9169-ab54-46ee-acb5-d1dc0047e59c" (UID: "fe9b9169-ab54-46ee-acb5-d1dc0047e59c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.367347 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-kube-api-access-jnppg" (OuterVolumeSpecName: "kube-api-access-jnppg") pod "fe9b9169-ab54-46ee-acb5-d1dc0047e59c" (UID: "fe9b9169-ab54-46ee-acb5-d1dc0047e59c"). InnerVolumeSpecName "kube-api-access-jnppg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.369033 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "fe9b9169-ab54-46ee-acb5-d1dc0047e59c" (UID: "fe9b9169-ab54-46ee-acb5-d1dc0047e59c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.369358 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/547994a2-f3d5-4ac9-a025-2644e86fe00d-kube-api-access-7gl4q" (OuterVolumeSpecName: "kube-api-access-7gl4q") pod "547994a2-f3d5-4ac9-a025-2644e86fe00d" (UID: "547994a2-f3d5-4ac9-a025-2644e86fe00d"). InnerVolumeSpecName "kube-api-access-7gl4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.369637 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/547994a2-f3d5-4ac9-a025-2644e86fe00d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "547994a2-f3d5-4ac9-a025-2644e86fe00d" (UID: "547994a2-f3d5-4ac9-a025-2644e86fe00d"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.414586 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74569d8966-5sjxs"] Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.426482 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5949fbc84f-vdxjp"] Jan 30 08:28:34 crc kubenswrapper[4870]: W0130 08:28:34.430235 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1872a14d_aeff_46f7_8430_c6fe0eb6973b.slice/crio-f7250a53827f362fa55ae4df1436ef860d73a09ab3dfd65756154cbbf24973a7 WatchSource:0}: Error finding container f7250a53827f362fa55ae4df1436ef860d73a09ab3dfd65756154cbbf24973a7: Status 404 returned error can't find the container with id f7250a53827f362fa55ae4df1436ef860d73a09ab3dfd65756154cbbf24973a7 Jan 30 08:28:34 crc kubenswrapper[4870]: W0130 08:28:34.431352 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4171155c_1d8c_48a0_9675_1c730f9130dc.slice/crio-3cd56b7866047c6a3192e1f27cec1f489317b2b2d8b6d0b806475464e27ca26f WatchSource:0}: Error finding container 3cd56b7866047c6a3192e1f27cec1f489317b2b2d8b6d0b806475464e27ca26f: Status 404 returned error can't find the container with id 3cd56b7866047c6a3192e1f27cec1f489317b2b2d8b6d0b806475464e27ca26f Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.467058 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.467095 4870 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/547994a2-f3d5-4ac9-a025-2644e86fe00d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.467105 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnppg\" (UniqueName: \"kubernetes.io/projected/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-kube-api-access-jnppg\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.467114 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.467123 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.467131 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/547994a2-f3d5-4ac9-a025-2644e86fe00d-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.467139 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gl4q\" (UniqueName: \"kubernetes.io/projected/547994a2-f3d5-4ac9-a025-2644e86fe00d-kube-api-access-7gl4q\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.467149 4870 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fe9b9169-ab54-46ee-acb5-d1dc0047e59c-horizon-secret-key\") on node 
\"crc\" DevicePath \"\"" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.467157 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/547994a2-f3d5-4ac9-a025-2644e86fe00d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.467165 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/547994a2-f3d5-4ac9-a025-2644e86fe00d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.557073 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-769d7654db-gw44c"] Jan 30 08:28:34 crc kubenswrapper[4870]: I0130 08:28:34.610948 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-g4m9m"] Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.186664 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-769d7654db-gw44c" event={"ID":"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2","Type":"ContainerStarted","Data":"cfc50cfb0483cb8c170c6df31ba4eb709bb2e1c2c8750dc908b1c4114238e8fc"} Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.187039 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-769d7654db-gw44c" event={"ID":"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2","Type":"ContainerStarted","Data":"97d8f52cba6eb7fd327935fabe36ebafa5dc36db888e1cb7a1e601511d9a23b2"} Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.192424 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g4m9m" event={"ID":"b9b91a69-f8ad-4d1d-a47d-c1921071c71a","Type":"ContainerStarted","Data":"091f6d41669e606ed42188e0c975f67619382a546808d176937498d135759acb"} Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.193911 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f","Type":"ContainerStarted","Data":"5ab755a04f7323eeb1d2f9974fb01380d8fa7fa3cba2f17b2589a502cb34ef24"} Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.196675 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74569d8966-5sjxs" event={"ID":"1872a14d-aeff-46f7-8430-c6fe0eb6973b","Type":"ContainerStarted","Data":"43897c9b161d8b60a5a17a99eb44b17b490c36c5693b312f71be7235a7fc2b25"} Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.196716 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74569d8966-5sjxs" event={"ID":"1872a14d-aeff-46f7-8430-c6fe0eb6973b","Type":"ContainerStarted","Data":"f7250a53827f362fa55ae4df1436ef860d73a09ab3dfd65756154cbbf24973a7"} Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.203626 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5949fbc84f-vdxjp" event={"ID":"4171155c-1d8c-48a0-9675-1c730f9130dc","Type":"ContainerStarted","Data":"57f324a7af1ed982422d86d531cc1073fc7e06667530fd2daf47081062016e35"} Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.203674 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5949fbc84f-vdxjp" event={"ID":"4171155c-1d8c-48a0-9675-1c730f9130dc","Type":"ContainerStarted","Data":"3cd56b7866047c6a3192e1f27cec1f489317b2b2d8b6d0b806475464e27ca26f"} Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.205846 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" 
event={"ID":"7fc2a1f3-54bc-4554-a413-69bc35b58a2f","Type":"ContainerStarted","Data":"893bc6818effa10ecee42122ccd6f6afc59777130894069439f5549f0dfabacb"} Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.206776 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.208719 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3be4280c-f244-49ee-8731-bf39ac51ee1e","Type":"ContainerStarted","Data":"2440043bd103b6c3935a639d98298276ce819674a69c89d3a193f378017291b1"} Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.208832 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="3be4280c-f244-49ee-8731-bf39ac51ee1e" containerName="watcher-api-log" containerID="cri-o://4613e64cbab9e5f1e4c63d6f62c12ca1aa12d4d56ce06a57b13b9b4dcc74559f" gracePeriod=30 Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.210993 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="3be4280c-f244-49ee-8731-bf39ac51ee1e" containerName="watcher-api" containerID="cri-o://2440043bd103b6c3935a639d98298276ce819674a69c89d3a193f378017291b1" gracePeriod=30 Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.211060 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.217275 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d56cb75f7-5b6cr" Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.223237 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="3be4280c-f244-49ee-8731-bf39ac51ee1e" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": EOF" Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.226466 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-b57k5" event={"ID":"1435e0c6-e24a-44d4-bf78-3e5300e23cdd","Type":"ContainerStarted","Data":"ac1cfe0654d6d9f59d0d7bba982a578597204c7a7dcbab5f91122bf878031c77"} Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.227223 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fb6b548c7-56kg5" Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.234203 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=5.347058933 podStartE2EDuration="41.234179717s" podCreationTimestamp="2026-01-30 08:27:54 +0000 UTC" firstStartedPulling="2026-01-30 08:27:56.385979362 +0000 UTC m=+1115.081526461" lastFinishedPulling="2026-01-30 08:28:32.273100126 +0000 UTC m=+1150.968647245" observedRunningTime="2026-01-30 08:28:35.226845269 +0000 UTC m=+1153.922392378" watchObservedRunningTime="2026-01-30 08:28:35.234179717 +0000 UTC m=+1153.929726836" Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.256068 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-b57k5" podStartSLOduration=3.255683937 podStartE2EDuration="40.256045817s" podCreationTimestamp="2026-01-30 08:27:55 +0000 UTC" firstStartedPulling="2026-01-30 08:27:57.045217666 +0000 UTC m=+1115.740764775" lastFinishedPulling="2026-01-30 08:28:34.045579546 +0000 UTC m=+1152.741126655" observedRunningTime="2026-01-30 08:28:35.243464346 +0000 UTC m=+1153.939011465" watchObservedRunningTime="2026-01-30 08:28:35.256045817 +0000 UTC m=+1153.951592946" Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.279684 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" podStartSLOduration=40.279667391 podStartE2EDuration="40.279667391s" podCreationTimestamp="2026-01-30 08:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:28:35.275372117 +0000 UTC m=+1153.970919226" watchObservedRunningTime="2026-01-30 08:28:35.279667391 +0000 UTC m=+1153.975214500" Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.299495 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=40.299480387 podStartE2EDuration="40.299480387s" podCreationTimestamp="2026-01-30 08:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:28:35.294293955 +0000 UTC m=+1153.989841064" watchObservedRunningTime="2026-01-30 08:28:35.299480387 +0000 UTC m=+1153.995027496" Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.388277 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fb6b548c7-56kg5"] Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.406379 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5fb6b548c7-56kg5"] Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.421594 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d56cb75f7-5b6cr"] Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.430131 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6d56cb75f7-5b6cr"] Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.651184 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.654077 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 30 08:28:35 crc kubenswrapper[4870]: I0130 08:28:35.684556 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/watcher-decision-engine-0" Jan 30 08:28:36 crc kubenswrapper[4870]: I0130 08:28:36.092836 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="547994a2-f3d5-4ac9-a025-2644e86fe00d" path="/var/lib/kubelet/pods/547994a2-f3d5-4ac9-a025-2644e86fe00d/volumes" Jan 30 08:28:36 crc kubenswrapper[4870]: I0130 08:28:36.094303 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe9b9169-ab54-46ee-acb5-d1dc0047e59c" path="/var/lib/kubelet/pods/fe9b9169-ab54-46ee-acb5-d1dc0047e59c/volumes" Jan 30 08:28:36 crc kubenswrapper[4870]: I0130 08:28:36.232834 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74569d8966-5sjxs" event={"ID":"1872a14d-aeff-46f7-8430-c6fe0eb6973b","Type":"ContainerStarted","Data":"960b2516ed61848524349c99edfb46a19e044030828811dfd8fbc7164a66aca3"} Jan 30 08:28:36 crc kubenswrapper[4870]: I0130 08:28:36.239270 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5949fbc84f-vdxjp" event={"ID":"4171155c-1d8c-48a0-9675-1c730f9130dc","Type":"ContainerStarted","Data":"8001ca067558561639146186888f3fefa9a3f66b8cfe6da27c20754262532feb"} Jan 30 08:28:36 crc kubenswrapper[4870]: I0130 08:28:36.239422 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5949fbc84f-vdxjp" podUID="4171155c-1d8c-48a0-9675-1c730f9130dc" containerName="horizon-log" containerID="cri-o://57f324a7af1ed982422d86d531cc1073fc7e06667530fd2daf47081062016e35" gracePeriod=30 Jan 30 08:28:36 crc kubenswrapper[4870]: I0130 08:28:36.239478 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5949fbc84f-vdxjp" podUID="4171155c-1d8c-48a0-9675-1c730f9130dc" containerName="horizon" containerID="cri-o://8001ca067558561639146186888f3fefa9a3f66b8cfe6da27c20754262532feb" gracePeriod=30 Jan 30 08:28:36 crc kubenswrapper[4870]: I0130 08:28:36.267902 4870 generic.go:334] "Generic (PLEG): container finished" podID="3be4280c-f244-49ee-8731-bf39ac51ee1e" containerID="4613e64cbab9e5f1e4c63d6f62c12ca1aa12d4d56ce06a57b13b9b4dcc74559f" exitCode=143 Jan 30 08:28:36 crc kubenswrapper[4870]: I0130 08:28:36.267983 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3be4280c-f244-49ee-8731-bf39ac51ee1e","Type":"ContainerDied","Data":"4613e64cbab9e5f1e4c63d6f62c12ca1aa12d4d56ce06a57b13b9b4dcc74559f"} Jan 30 08:28:36 crc kubenswrapper[4870]: I0130 08:28:36.318819 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5949fbc84f-vdxjp" podStartSLOduration=39.159902825 podStartE2EDuration="39.318800955s" podCreationTimestamp="2026-01-30 08:27:57 +0000 UTC" firstStartedPulling="2026-01-30 08:28:34.434012482 +0000 UTC m=+1153.129559581" lastFinishedPulling="2026-01-30 08:28:34.592910602 +0000 UTC m=+1153.288457711" observedRunningTime="2026-01-30 08:28:36.310566458 +0000 UTC m=+1155.006113567" watchObservedRunningTime="2026-01-30 08:28:36.318800955 +0000 UTC m=+1155.014348064" Jan 30 08:28:36 crc kubenswrapper[4870]: I0130 08:28:36.322305 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-769d7654db-gw44c" event={"ID":"b6c9337c-50ce-4c5c-a84f-8092d25fa1e2","Type":"ContainerStarted","Data":"90c555109c0c2fc282c1e031c1ed41dac3b5e5d7868f70671a0deb315bc10fb6"} Jan 30 08:28:36 crc kubenswrapper[4870]: I0130 08:28:36.326108 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g4m9m" 
event={"ID":"b9b91a69-f8ad-4d1d-a47d-c1921071c71a","Type":"ContainerStarted","Data":"423e4f8207599a836d08eca85be2c21680c69e731edaed6ac9d59c605d325bfb"} Jan 30 08:28:36 crc kubenswrapper[4870]: I0130 08:28:36.326890 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-74569d8966-5sjxs" podStartSLOduration=32.170821835 podStartE2EDuration="32.326863096s" podCreationTimestamp="2026-01-30 08:28:04 +0000 UTC" firstStartedPulling="2026-01-30 08:28:34.434295881 +0000 UTC m=+1153.129842990" lastFinishedPulling="2026-01-30 08:28:34.590337142 +0000 UTC m=+1153.285884251" observedRunningTime="2026-01-30 08:28:36.2581884 +0000 UTC m=+1154.953735509" watchObservedRunningTime="2026-01-30 08:28:36.326863096 +0000 UTC m=+1155.022410205" Jan 30 08:28:36 crc kubenswrapper[4870]: I0130 08:28:36.343786 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tssp8" event={"ID":"edd09a42-14b6-4161-ba2a-82c4cf4f5983","Type":"ContainerStarted","Data":"85f1049088e388e69d6da33f4eab9143943bc4d4ba2179d9093657152d474310"} Jan 30 08:28:36 crc kubenswrapper[4870]: I0130 08:28:36.345040 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 30 08:28:36 crc kubenswrapper[4870]: I0130 08:28:36.366658 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-769d7654db-gw44c" podStartSLOduration=32.366639391 podStartE2EDuration="32.366639391s" podCreationTimestamp="2026-01-30 08:28:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:28:36.352585014 +0000 UTC m=+1155.048132123" watchObservedRunningTime="2026-01-30 08:28:36.366639391 +0000 UTC m=+1155.062186500" Jan 30 08:28:36 crc kubenswrapper[4870]: I0130 08:28:36.416731 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 30 08:28:36 crc kubenswrapper[4870]: I0130 08:28:36.421508 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-tssp8" podStartSLOduration=4.644066308 podStartE2EDuration="48.421491697s" podCreationTimestamp="2026-01-30 08:27:48 +0000 UTC" firstStartedPulling="2026-01-30 08:27:50.273078351 +0000 UTC m=+1108.968625460" lastFinishedPulling="2026-01-30 08:28:34.05050374 +0000 UTC m=+1152.746050849" observedRunningTime="2026-01-30 08:28:36.411990391 +0000 UTC m=+1155.107537510" watchObservedRunningTime="2026-01-30 08:28:36.421491697 +0000 UTC m=+1155.117038806" Jan 30 08:28:36 crc kubenswrapper[4870]: I0130 08:28:36.422083 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-g4m9m" podStartSLOduration=11.422077786 podStartE2EDuration="11.422077786s" podCreationTimestamp="2026-01-30 08:28:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:28:36.391247347 +0000 UTC m=+1155.086794466" watchObservedRunningTime="2026-01-30 08:28:36.422077786 +0000 UTC m=+1155.117624895" Jan 30 08:28:36 crc kubenswrapper[4870]: I0130 08:28:36.477848 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 30 08:28:37 crc kubenswrapper[4870]: I0130 08:28:37.814793 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5949fbc84f-vdxjp" Jan 30 08:28:38 crc 
kubenswrapper[4870]: I0130 08:28:38.360334 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f" containerName="watcher-decision-engine" containerID="cri-o://5ab755a04f7323eeb1d2f9974fb01380d8fa7fa3cba2f17b2589a502cb34ef24" gracePeriod=30 Jan 30 08:28:38 crc kubenswrapper[4870]: I0130 08:28:38.967548 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="3be4280c-f244-49ee-8731-bf39ac51ee1e" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": read tcp 10.217.0.2:50244->10.217.0.152:9322: read: connection reset by peer" Jan 30 08:28:39 crc kubenswrapper[4870]: I0130 08:28:39.371273 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2","Type":"ContainerStarted","Data":"b45a58a1e3e4865b397313616e6494da5d8e1887dd9401a657303e526b984274"} Jan 30 08:28:39 crc kubenswrapper[4870]: I0130 08:28:39.373166 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9g27p" event={"ID":"685bde78-dea1-4864-a825-af176178bd11","Type":"ContainerStarted","Data":"c723dc182803022ba9e618ac6407cbccb617a7c5a0a43457386f580c7a154614"} Jan 30 08:28:39 crc kubenswrapper[4870]: I0130 08:28:39.377227 4870 generic.go:334] "Generic (PLEG): container finished" podID="3be4280c-f244-49ee-8731-bf39ac51ee1e" containerID="2440043bd103b6c3935a639d98298276ce819674a69c89d3a193f378017291b1" exitCode=0 Jan 30 08:28:39 crc kubenswrapper[4870]: I0130 08:28:39.377266 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3be4280c-f244-49ee-8731-bf39ac51ee1e","Type":"ContainerDied","Data":"2440043bd103b6c3935a639d98298276ce819674a69c89d3a193f378017291b1"} Jan 30 08:28:39 crc kubenswrapper[4870]: I0130 08:28:39.409936 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-9g27p" podStartSLOduration=2.847636672 podStartE2EDuration="44.409917018s" podCreationTimestamp="2026-01-30 08:27:55 +0000 UTC" firstStartedPulling="2026-01-30 08:27:56.700961035 +0000 UTC m=+1115.396508144" lastFinishedPulling="2026-01-30 08:28:38.263241371 +0000 UTC m=+1156.958788490" observedRunningTime="2026-01-30 08:28:39.400682401 +0000 UTC m=+1158.096229510" watchObservedRunningTime="2026-01-30 08:28:39.409917018 +0000 UTC m=+1158.105464137" Jan 30 08:28:39 crc kubenswrapper[4870]: I0130 08:28:39.899012 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.013330 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3be4280c-f244-49ee-8731-bf39ac51ee1e-logs\") pod \"3be4280c-f244-49ee-8731-bf39ac51ee1e\" (UID: \"3be4280c-f244-49ee-8731-bf39ac51ee1e\") " Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.013375 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3be4280c-f244-49ee-8731-bf39ac51ee1e-custom-prometheus-ca\") pod \"3be4280c-f244-49ee-8731-bf39ac51ee1e\" (UID: \"3be4280c-f244-49ee-8731-bf39ac51ee1e\") " Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.014310 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3be4280c-f244-49ee-8731-bf39ac51ee1e-logs" (OuterVolumeSpecName: "logs") pod "3be4280c-f244-49ee-8731-bf39ac51ee1e" (UID: "3be4280c-f244-49ee-8731-bf39ac51ee1e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.014349 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be4280c-f244-49ee-8731-bf39ac51ee1e-combined-ca-bundle\") pod \"3be4280c-f244-49ee-8731-bf39ac51ee1e\" (UID: \"3be4280c-f244-49ee-8731-bf39ac51ee1e\") " Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.014499 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3be4280c-f244-49ee-8731-bf39ac51ee1e-config-data\") pod \"3be4280c-f244-49ee-8731-bf39ac51ee1e\" (UID: \"3be4280c-f244-49ee-8731-bf39ac51ee1e\") " Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.014549 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gsr9\" (UniqueName: \"kubernetes.io/projected/3be4280c-f244-49ee-8731-bf39ac51ee1e-kube-api-access-2gsr9\") pod \"3be4280c-f244-49ee-8731-bf39ac51ee1e\" (UID: \"3be4280c-f244-49ee-8731-bf39ac51ee1e\") " Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.014941 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3be4280c-f244-49ee-8731-bf39ac51ee1e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.019855 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3be4280c-f244-49ee-8731-bf39ac51ee1e-kube-api-access-2gsr9" (OuterVolumeSpecName: "kube-api-access-2gsr9") pod "3be4280c-f244-49ee-8731-bf39ac51ee1e" (UID: "3be4280c-f244-49ee-8731-bf39ac51ee1e"). InnerVolumeSpecName "kube-api-access-2gsr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.046979 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3be4280c-f244-49ee-8731-bf39ac51ee1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3be4280c-f244-49ee-8731-bf39ac51ee1e" (UID: "3be4280c-f244-49ee-8731-bf39ac51ee1e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.050509 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3be4280c-f244-49ee-8731-bf39ac51ee1e-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "3be4280c-f244-49ee-8731-bf39ac51ee1e" (UID: "3be4280c-f244-49ee-8731-bf39ac51ee1e"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.091819 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3be4280c-f244-49ee-8731-bf39ac51ee1e-config-data" (OuterVolumeSpecName: "config-data") pod "3be4280c-f244-49ee-8731-bf39ac51ee1e" (UID: "3be4280c-f244-49ee-8731-bf39ac51ee1e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.116917 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be4280c-f244-49ee-8731-bf39ac51ee1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.116946 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3be4280c-f244-49ee-8731-bf39ac51ee1e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.116955 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gsr9\" (UniqueName: \"kubernetes.io/projected/3be4280c-f244-49ee-8731-bf39ac51ee1e-kube-api-access-2gsr9\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.116966 4870 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3be4280c-f244-49ee-8731-bf39ac51ee1e-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.409213 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3be4280c-f244-49ee-8731-bf39ac51ee1e","Type":"ContainerDied","Data":"ce3f73b3878e9503e479cb8deefa9a49d72c579fcd3b7d49136ba600b5e48a5d"} Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.409268 4870 scope.go:117] "RemoveContainer" containerID="2440043bd103b6c3935a639d98298276ce819674a69c89d3a193f378017291b1" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.409417 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.422812 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"d501bb9c-d88d-4362-a48e-4d0347ecc90e","Type":"ContainerStarted","Data":"ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc"} Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.430476 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.453737 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.487424 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 30 08:28:40 crc kubenswrapper[4870]: E0130 08:28:40.487905 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3be4280c-f244-49ee-8731-bf39ac51ee1e" containerName="watcher-api" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.487929 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="3be4280c-f244-49ee-8731-bf39ac51ee1e" containerName="watcher-api" Jan 30 08:28:40 crc kubenswrapper[4870]: E0130 08:28:40.487951 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3be4280c-f244-49ee-8731-bf39ac51ee1e" containerName="watcher-api-log" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.487959 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="3be4280c-f244-49ee-8731-bf39ac51ee1e" containerName="watcher-api-log" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.488512 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="3be4280c-f244-49ee-8731-bf39ac51ee1e" containerName="watcher-api" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.488543 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="3be4280c-f244-49ee-8731-bf39ac51ee1e" containerName="watcher-api-log" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.489692 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.493071 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.502192 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=3.319949566 podStartE2EDuration="46.502171953s" podCreationTimestamp="2026-01-30 08:27:54 +0000 UTC" firstStartedPulling="2026-01-30 08:27:56.279812662 +0000 UTC m=+1114.975359771" lastFinishedPulling="2026-01-30 08:28:39.462035049 +0000 UTC m=+1158.157582158" observedRunningTime="2026-01-30 08:28:40.450324602 +0000 UTC m=+1159.145871731" watchObservedRunningTime="2026-01-30 08:28:40.502171953 +0000 UTC m=+1159.197719072" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.502691 4870 scope.go:117] "RemoveContainer" containerID="4613e64cbab9e5f1e4c63d6f62c12ca1aa12d4d56ce06a57b13b9b4dcc74559f" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.520827 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.527056 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63b26d0-7049-490c-97dc-117bbbf5fa01-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b63b26d0-7049-490c-97dc-117bbbf5fa01\") " pod="openstack/watcher-api-0" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.527129 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkg6n\" (UniqueName: \"kubernetes.io/projected/b63b26d0-7049-490c-97dc-117bbbf5fa01-kube-api-access-kkg6n\") pod \"watcher-api-0\" (UID: \"b63b26d0-7049-490c-97dc-117bbbf5fa01\") " pod="openstack/watcher-api-0" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.527163 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b63b26d0-7049-490c-97dc-117bbbf5fa01-config-data\") pod \"watcher-api-0\" (UID: \"b63b26d0-7049-490c-97dc-117bbbf5fa01\") " pod="openstack/watcher-api-0" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.527266 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b63b26d0-7049-490c-97dc-117bbbf5fa01-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b63b26d0-7049-490c-97dc-117bbbf5fa01\") " pod="openstack/watcher-api-0" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.527356 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b63b26d0-7049-490c-97dc-117bbbf5fa01-logs\") pod \"watcher-api-0\" (UID: \"b63b26d0-7049-490c-97dc-117bbbf5fa01\") " pod="openstack/watcher-api-0" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.576959 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.629058 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b63b26d0-7049-490c-97dc-117bbbf5fa01-logs\") pod \"watcher-api-0\" (UID: \"b63b26d0-7049-490c-97dc-117bbbf5fa01\") " 
pod="openstack/watcher-api-0" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.629150 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63b26d0-7049-490c-97dc-117bbbf5fa01-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b63b26d0-7049-490c-97dc-117bbbf5fa01\") " pod="openstack/watcher-api-0" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.629188 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkg6n\" (UniqueName: \"kubernetes.io/projected/b63b26d0-7049-490c-97dc-117bbbf5fa01-kube-api-access-kkg6n\") pod \"watcher-api-0\" (UID: \"b63b26d0-7049-490c-97dc-117bbbf5fa01\") " pod="openstack/watcher-api-0" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.629228 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b63b26d0-7049-490c-97dc-117bbbf5fa01-config-data\") pod \"watcher-api-0\" (UID: \"b63b26d0-7049-490c-97dc-117bbbf5fa01\") " pod="openstack/watcher-api-0" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.629298 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b63b26d0-7049-490c-97dc-117bbbf5fa01-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b63b26d0-7049-490c-97dc-117bbbf5fa01\") " pod="openstack/watcher-api-0" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.629543 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b63b26d0-7049-490c-97dc-117bbbf5fa01-logs\") pod \"watcher-api-0\" (UID: \"b63b26d0-7049-490c-97dc-117bbbf5fa01\") " pod="openstack/watcher-api-0" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.639689 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b63b26d0-7049-490c-97dc-117bbbf5fa01-config-data\") pod \"watcher-api-0\" (UID: \"b63b26d0-7049-490c-97dc-117bbbf5fa01\") " pod="openstack/watcher-api-0" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.641581 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b63b26d0-7049-490c-97dc-117bbbf5fa01-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b63b26d0-7049-490c-97dc-117bbbf5fa01\") " pod="openstack/watcher-api-0" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.642523 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63b26d0-7049-490c-97dc-117bbbf5fa01-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b63b26d0-7049-490c-97dc-117bbbf5fa01\") " pod="openstack/watcher-api-0" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.702785 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkg6n\" (UniqueName: \"kubernetes.io/projected/b63b26d0-7049-490c-97dc-117bbbf5fa01-kube-api-access-kkg6n\") pod \"watcher-api-0\" (UID: \"b63b26d0-7049-490c-97dc-117bbbf5fa01\") " pod="openstack/watcher-api-0" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.820912 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.898102 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.965007 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757cc9679f-wq2nt"] Jan 30 08:28:40 crc kubenswrapper[4870]: I0130 08:28:40.965240 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" podUID="c409417e-6b71-491c-b7c5-bf1a2b63baed" containerName="dnsmasq-dns" containerID="cri-o://72ef45c7c24f8c5fc6788a5862241b333000ac3fd13dcb6350ad2553d12d13f8" gracePeriod=10 Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.432000 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.456371 4870 generic.go:334] "Generic (PLEG): container finished" podID="b9b91a69-f8ad-4d1d-a47d-c1921071c71a" containerID="423e4f8207599a836d08eca85be2c21680c69e731edaed6ac9d59c605d325bfb" exitCode=0 Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.456475 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g4m9m" event={"ID":"b9b91a69-f8ad-4d1d-a47d-c1921071c71a","Type":"ContainerDied","Data":"423e4f8207599a836d08eca85be2c21680c69e731edaed6ac9d59c605d325bfb"} Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.476272 4870 generic.go:334] "Generic (PLEG): container finished" podID="c409417e-6b71-491c-b7c5-bf1a2b63baed" containerID="72ef45c7c24f8c5fc6788a5862241b333000ac3fd13dcb6350ad2553d12d13f8" exitCode=0 Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.477037 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" event={"ID":"c409417e-6b71-491c-b7c5-bf1a2b63baed","Type":"ContainerDied","Data":"72ef45c7c24f8c5fc6788a5862241b333000ac3fd13dcb6350ad2553d12d13f8"} Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.659541 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.761423 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-dns-swift-storage-0\") pod \"c409417e-6b71-491c-b7c5-bf1a2b63baed\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.761464 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mpz4\" (UniqueName: \"kubernetes.io/projected/c409417e-6b71-491c-b7c5-bf1a2b63baed-kube-api-access-5mpz4\") pod \"c409417e-6b71-491c-b7c5-bf1a2b63baed\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.761525 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-config\") pod \"c409417e-6b71-491c-b7c5-bf1a2b63baed\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.761560 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-ovsdbserver-sb\") pod \"c409417e-6b71-491c-b7c5-bf1a2b63baed\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.761637 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-ovsdbserver-nb\") pod \"c409417e-6b71-491c-b7c5-bf1a2b63baed\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.762345 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-dns-svc\") pod \"c409417e-6b71-491c-b7c5-bf1a2b63baed\" (UID: \"c409417e-6b71-491c-b7c5-bf1a2b63baed\") " Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.784510 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c409417e-6b71-491c-b7c5-bf1a2b63baed-kube-api-access-5mpz4" (OuterVolumeSpecName: "kube-api-access-5mpz4") pod "c409417e-6b71-491c-b7c5-bf1a2b63baed" (UID: "c409417e-6b71-491c-b7c5-bf1a2b63baed"). InnerVolumeSpecName "kube-api-access-5mpz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.826888 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c409417e-6b71-491c-b7c5-bf1a2b63baed" (UID: "c409417e-6b71-491c-b7c5-bf1a2b63baed"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.838371 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c409417e-6b71-491c-b7c5-bf1a2b63baed" (UID: "c409417e-6b71-491c-b7c5-bf1a2b63baed"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.847321 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-config" (OuterVolumeSpecName: "config") pod "c409417e-6b71-491c-b7c5-bf1a2b63baed" (UID: "c409417e-6b71-491c-b7c5-bf1a2b63baed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.859129 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c409417e-6b71-491c-b7c5-bf1a2b63baed" (UID: "c409417e-6b71-491c-b7c5-bf1a2b63baed"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.862394 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c409417e-6b71-491c-b7c5-bf1a2b63baed" (UID: "c409417e-6b71-491c-b7c5-bf1a2b63baed"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.863845 4870 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.863883 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mpz4\" (UniqueName: \"kubernetes.io/projected/c409417e-6b71-491c-b7c5-bf1a2b63baed-kube-api-access-5mpz4\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.863896 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.863908 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.863919 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:41 crc kubenswrapper[4870]: I0130 08:28:41.863928 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c409417e-6b71-491c-b7c5-bf1a2b63baed-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:42 crc kubenswrapper[4870]: I0130 08:28:42.093807 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3be4280c-f244-49ee-8731-bf39ac51ee1e" path="/var/lib/kubelet/pods/3be4280c-f244-49ee-8731-bf39ac51ee1e/volumes" Jan 30 08:28:42 crc kubenswrapper[4870]: I0130 08:28:42.497430 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" event={"ID":"c409417e-6b71-491c-b7c5-bf1a2b63baed","Type":"ContainerDied","Data":"27e7324b5df73e3d0d4ead3bf3867e6cd08ef4e1f33f8795d331a5b682f586af"} Jan 30 08:28:42 crc 
kubenswrapper[4870]: I0130 08:28:42.497702 4870 scope.go:117] "RemoveContainer" containerID="72ef45c7c24f8c5fc6788a5862241b333000ac3fd13dcb6350ad2553d12d13f8" Jan 30 08:28:42 crc kubenswrapper[4870]: I0130 08:28:42.497470 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757cc9679f-wq2nt" Jan 30 08:28:42 crc kubenswrapper[4870]: I0130 08:28:42.508252 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b63b26d0-7049-490c-97dc-117bbbf5fa01","Type":"ContainerStarted","Data":"be9af03cdaac6d198c36c22f6a72da93c6a8876d7522a083b84ad37b4c4205a3"} Jan 30 08:28:42 crc kubenswrapper[4870]: I0130 08:28:42.508300 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 30 08:28:42 crc kubenswrapper[4870]: I0130 08:28:42.508311 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b63b26d0-7049-490c-97dc-117bbbf5fa01","Type":"ContainerStarted","Data":"2f2cc7b48ae21cc95e0db1fb5e108ac03cb5f4ea23904284d0731df4d6012673"} Jan 30 08:28:42 crc kubenswrapper[4870]: I0130 08:28:42.508320 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b63b26d0-7049-490c-97dc-117bbbf5fa01","Type":"ContainerStarted","Data":"e929fb45514c18c709b7cf772f7d4133121a7d17d636aed37d87cc9bcf50a23c"} Jan 30 08:28:42 crc kubenswrapper[4870]: I0130 08:28:42.525897 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757cc9679f-wq2nt"] Jan 30 08:28:42 crc kubenswrapper[4870]: I0130 08:28:42.532762 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757cc9679f-wq2nt"] Jan 30 08:28:42 crc kubenswrapper[4870]: I0130 08:28:42.538316 4870 scope.go:117] "RemoveContainer" containerID="0bc8a7090e4dbce528560fe9634da361240a61440f10f7e3c579fda9915e352a" Jan 30 08:28:42 crc kubenswrapper[4870]: I0130 08:28:42.541632 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.541613354 podStartE2EDuration="2.541613354s" podCreationTimestamp="2026-01-30 08:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:28:42.539316332 +0000 UTC m=+1161.234863451" watchObservedRunningTime="2026-01-30 08:28:42.541613354 +0000 UTC m=+1161.237160463" Jan 30 08:28:42 crc kubenswrapper[4870]: I0130 08:28:42.895331 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-g4m9m" Jan 30 08:28:42 crc kubenswrapper[4870]: I0130 08:28:42.984949 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-scripts\") pod \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " Jan 30 08:28:42 crc kubenswrapper[4870]: I0130 08:28:42.985296 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-combined-ca-bundle\") pod \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " Jan 30 08:28:42 crc kubenswrapper[4870]: I0130 08:28:42.985317 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-fernet-keys\") pod \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " Jan 30 08:28:42 crc kubenswrapper[4870]: I0130 08:28:42.985338 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-credential-keys\") pod \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " Jan 30 08:28:42 crc kubenswrapper[4870]: I0130 08:28:42.985370 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t66xr\" (UniqueName: \"kubernetes.io/projected/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-kube-api-access-t66xr\") pod \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " Jan 30 08:28:42 crc kubenswrapper[4870]: I0130 08:28:42.985398 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-config-data\") pod \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\" (UID: \"b9b91a69-f8ad-4d1d-a47d-c1921071c71a\") " Jan 30 08:28:42 crc kubenswrapper[4870]: I0130 08:28:42.993226 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b9b91a69-f8ad-4d1d-a47d-c1921071c71a" (UID: "b9b91a69-f8ad-4d1d-a47d-c1921071c71a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:42 crc kubenswrapper[4870]: I0130 08:28:42.993250 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-kube-api-access-t66xr" (OuterVolumeSpecName: "kube-api-access-t66xr") pod "b9b91a69-f8ad-4d1d-a47d-c1921071c71a" (UID: "b9b91a69-f8ad-4d1d-a47d-c1921071c71a"). InnerVolumeSpecName "kube-api-access-t66xr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.006596 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b9b91a69-f8ad-4d1d-a47d-c1921071c71a" (UID: "b9b91a69-f8ad-4d1d-a47d-c1921071c71a"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.006926 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-scripts" (OuterVolumeSpecName: "scripts") pod "b9b91a69-f8ad-4d1d-a47d-c1921071c71a" (UID: "b9b91a69-f8ad-4d1d-a47d-c1921071c71a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.014004 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9b91a69-f8ad-4d1d-a47d-c1921071c71a" (UID: "b9b91a69-f8ad-4d1d-a47d-c1921071c71a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.014577 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-config-data" (OuterVolumeSpecName: "config-data") pod "b9b91a69-f8ad-4d1d-a47d-c1921071c71a" (UID: "b9b91a69-f8ad-4d1d-a47d-c1921071c71a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.087023 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.087061 4870 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.087074 4870 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.087087 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t66xr\" (UniqueName: \"kubernetes.io/projected/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-kube-api-access-t66xr\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.087099 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.087109 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b91a69-f8ad-4d1d-a47d-c1921071c71a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.526339 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-g4m9m" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.526339 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g4m9m" event={"ID":"b9b91a69-f8ad-4d1d-a47d-c1921071c71a","Type":"ContainerDied","Data":"091f6d41669e606ed42188e0c975f67619382a546808d176937498d135759acb"} Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.526385 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="091f6d41669e606ed42188e0c975f67619382a546808d176937498d135759acb" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.733918 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-55b585f57f-9h2lg"] Jan 30 08:28:43 crc kubenswrapper[4870]: E0130 08:28:43.734440 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b91a69-f8ad-4d1d-a47d-c1921071c71a" containerName="keystone-bootstrap" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.734461 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b91a69-f8ad-4d1d-a47d-c1921071c71a" containerName="keystone-bootstrap" Jan 30 08:28:43 crc kubenswrapper[4870]: E0130 08:28:43.734475 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c409417e-6b71-491c-b7c5-bf1a2b63baed" containerName="init" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.734483 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="c409417e-6b71-491c-b7c5-bf1a2b63baed" containerName="init" Jan 30 08:28:43 crc kubenswrapper[4870]: E0130 08:28:43.734507 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c409417e-6b71-491c-b7c5-bf1a2b63baed" containerName="dnsmasq-dns" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.734516 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="c409417e-6b71-491c-b7c5-bf1a2b63baed" containerName="dnsmasq-dns" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.734744 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="c409417e-6b71-491c-b7c5-bf1a2b63baed" containerName="dnsmasq-dns" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.734759 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b91a69-f8ad-4d1d-a47d-c1921071c71a" containerName="keystone-bootstrap" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.735653 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.741430 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.741460 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.741669 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.741780 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vn7b5" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.745927 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.746258 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.746586 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-55b585f57f-9h2lg"] Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.800383 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb9f4cfa-0698-47dd-9319-47b185d2f937-combined-ca-bundle\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.800451 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb9f4cfa-0698-47dd-9319-47b185d2f937-internal-tls-certs\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.800571 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb9f4cfa-0698-47dd-9319-47b185d2f937-scripts\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.800643 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb9f4cfa-0698-47dd-9319-47b185d2f937-public-tls-certs\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.800685 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dpph\" (UniqueName: \"kubernetes.io/projected/cb9f4cfa-0698-47dd-9319-47b185d2f937-kube-api-access-6dpph\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.800716 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb9f4cfa-0698-47dd-9319-47b185d2f937-config-data\") pod \"keystone-55b585f57f-9h2lg\" (UID: 
\"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.800742 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cb9f4cfa-0698-47dd-9319-47b185d2f937-credential-keys\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.800775 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cb9f4cfa-0698-47dd-9319-47b185d2f937-fernet-keys\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.902328 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb9f4cfa-0698-47dd-9319-47b185d2f937-public-tls-certs\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.902385 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dpph\" (UniqueName: \"kubernetes.io/projected/cb9f4cfa-0698-47dd-9319-47b185d2f937-kube-api-access-6dpph\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.902414 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb9f4cfa-0698-47dd-9319-47b185d2f937-config-data\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.902436 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cb9f4cfa-0698-47dd-9319-47b185d2f937-credential-keys\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.902461 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cb9f4cfa-0698-47dd-9319-47b185d2f937-fernet-keys\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.902492 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb9f4cfa-0698-47dd-9319-47b185d2f937-combined-ca-bundle\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.903427 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb9f4cfa-0698-47dd-9319-47b185d2f937-internal-tls-certs\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " 
pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.903527 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb9f4cfa-0698-47dd-9319-47b185d2f937-scripts\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.907918 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb9f4cfa-0698-47dd-9319-47b185d2f937-internal-tls-certs\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.908381 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cb9f4cfa-0698-47dd-9319-47b185d2f937-fernet-keys\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.908421 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cb9f4cfa-0698-47dd-9319-47b185d2f937-credential-keys\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.909319 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb9f4cfa-0698-47dd-9319-47b185d2f937-public-tls-certs\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.909559 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb9f4cfa-0698-47dd-9319-47b185d2f937-config-data\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.910048 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb9f4cfa-0698-47dd-9319-47b185d2f937-scripts\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.910113 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb9f4cfa-0698-47dd-9319-47b185d2f937-combined-ca-bundle\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:43 crc kubenswrapper[4870]: I0130 08:28:43.943052 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dpph\" (UniqueName: \"kubernetes.io/projected/cb9f4cfa-0698-47dd-9319-47b185d2f937-kube-api-access-6dpph\") pod \"keystone-55b585f57f-9h2lg\" (UID: \"cb9f4cfa-0698-47dd-9319-47b185d2f937\") " pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:44 crc kubenswrapper[4870]: I0130 08:28:44.060299 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:44 crc kubenswrapper[4870]: I0130 08:28:44.086863 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c409417e-6b71-491c-b7c5-bf1a2b63baed" path="/var/lib/kubelet/pods/c409417e-6b71-491c-b7c5-bf1a2b63baed/volumes" Jan 30 08:28:44 crc kubenswrapper[4870]: I0130 08:28:44.650401 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-55b585f57f-9h2lg"] Jan 30 08:28:44 crc kubenswrapper[4870]: I0130 08:28:44.714584 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:44 crc kubenswrapper[4870]: I0130 08:28:44.714947 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:44 crc kubenswrapper[4870]: I0130 08:28:44.815755 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:44 crc kubenswrapper[4870]: I0130 08:28:44.816655 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:45 crc kubenswrapper[4870]: I0130 08:28:45.520550 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Jan 30 08:28:45 crc kubenswrapper[4870]: I0130 08:28:45.528521 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 30 08:28:45 crc kubenswrapper[4870]: I0130 08:28:45.547899 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Jan 30 08:28:45 crc kubenswrapper[4870]: I0130 08:28:45.555948 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-55b585f57f-9h2lg" event={"ID":"cb9f4cfa-0698-47dd-9319-47b185d2f937","Type":"ContainerStarted","Data":"4324809eb746d26c074eeb09c271b7fa56245b6d95f28298ee1ac9c6cd8c371f"} Jan 30 08:28:45 crc kubenswrapper[4870]: I0130 08:28:45.588678 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Jan 30 08:28:45 crc kubenswrapper[4870]: I0130 08:28:45.631493 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Jan 30 08:28:45 crc kubenswrapper[4870]: I0130 08:28:45.821777 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 30 08:28:47 crc kubenswrapper[4870]: I0130 08:28:47.570706 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="d501bb9c-d88d-4362-a48e-4d0347ecc90e" containerName="watcher-applier" containerID="cri-o://ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc" gracePeriod=30 Jan 30 08:28:50 crc kubenswrapper[4870]: E0130 08:28:50.523369 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 30 08:28:50 crc kubenswrapper[4870]: E0130 08:28:50.525388 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 30 08:28:50 crc kubenswrapper[4870]: E0130 08:28:50.526676 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 30 08:28:50 crc kubenswrapper[4870]: E0130 08:28:50.526734 4870 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="d501bb9c-d88d-4362-a48e-4d0347ecc90e" containerName="watcher-applier" Jan 30 08:28:50 crc kubenswrapper[4870]: I0130 08:28:50.604386 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-55b585f57f-9h2lg" event={"ID":"cb9f4cfa-0698-47dd-9319-47b185d2f937","Type":"ContainerStarted","Data":"e8483b8d3774b1165f5fdf771316420579c648c24476ba3c43ca752d0dec0955"} Jan 30 08:28:50 crc kubenswrapper[4870]: I0130 08:28:50.604898 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:28:50 crc kubenswrapper[4870]: I0130 08:28:50.623582 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-55b585f57f-9h2lg" podStartSLOduration=7.623564484 podStartE2EDuration="7.623564484s" podCreationTimestamp="2026-01-30 08:28:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:28:50.62188745 +0000 UTC m=+1169.317434569" watchObservedRunningTime="2026-01-30 08:28:50.623564484 +0000 UTC m=+1169.319111593" Jan 30 08:28:50 crc kubenswrapper[4870]: I0130 08:28:50.821971 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 30 08:28:50 crc kubenswrapper[4870]: I0130 08:28:50.825484 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 30 08:28:51 crc kubenswrapper[4870]: I0130 08:28:51.622700 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 30 08:28:52 crc kubenswrapper[4870]: I0130 08:28:52.640511 4870 generic.go:334] "Generic (PLEG): container finished" podID="1435e0c6-e24a-44d4-bf78-3e5300e23cdd" containerID="ac1cfe0654d6d9f59d0d7bba982a578597204c7a7dcbab5f91122bf878031c77" exitCode=0 Jan 30 08:28:52 crc kubenswrapper[4870]: I0130 08:28:52.640579 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-b57k5" event={"ID":"1435e0c6-e24a-44d4-bf78-3e5300e23cdd","Type":"ContainerDied","Data":"ac1cfe0654d6d9f59d0d7bba982a578597204c7a7dcbab5f91122bf878031c77"} Jan 30 08:28:52 crc kubenswrapper[4870]: I0130 08:28:52.642749 4870 generic.go:334] "Generic (PLEG): container finished" podID="8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f" containerID="5ab755a04f7323eeb1d2f9974fb01380d8fa7fa3cba2f17b2589a502cb34ef24" exitCode=1 Jan 30 08:28:52 crc kubenswrapper[4870]: I0130 08:28:52.642798 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f","Type":"ContainerDied","Data":"5ab755a04f7323eeb1d2f9974fb01380d8fa7fa3cba2f17b2589a502cb34ef24"} Jan 30 08:28:54 crc kubenswrapper[4870]: I0130 08:28:54.510587 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 30 08:28:54 crc kubenswrapper[4870]: I0130 08:28:54.516421 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="b63b26d0-7049-490c-97dc-117bbbf5fa01" containerName="watcher-api-log" containerID="cri-o://2f2cc7b48ae21cc95e0db1fb5e108ac03cb5f4ea23904284d0731df4d6012673" gracePeriod=30 Jan 30 08:28:54 crc kubenswrapper[4870]: I0130 08:28:54.516578 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="b63b26d0-7049-490c-97dc-117bbbf5fa01" containerName="watcher-api" containerID="cri-o://be9af03cdaac6d198c36c22f6a72da93c6a8876d7522a083b84ad37b4c4205a3" gracePeriod=30 Jan 30 08:28:54 crc kubenswrapper[4870]: I0130 08:28:54.659938 4870 generic.go:334] "Generic (PLEG): container finished" podID="b63b26d0-7049-490c-97dc-117bbbf5fa01" containerID="2f2cc7b48ae21cc95e0db1fb5e108ac03cb5f4ea23904284d0731df4d6012673" exitCode=143 Jan 30 08:28:54 crc kubenswrapper[4870]: I0130 08:28:54.659966 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b63b26d0-7049-490c-97dc-117bbbf5fa01","Type":"ContainerDied","Data":"2f2cc7b48ae21cc95e0db1fb5e108ac03cb5f4ea23904284d0731df4d6012673"} Jan 30 08:28:55 crc kubenswrapper[4870]: I0130 08:28:55.250246 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:28:55 crc kubenswrapper[4870]: I0130 08:28:55.250495 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:28:55 crc kubenswrapper[4870]: I0130 08:28:55.250533 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:28:55 crc kubenswrapper[4870]: I0130 08:28:55.251201 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"736e1ea4b0b2b4fa81dc9ec4fa9950e05f221b62734f7e8de7d7969e9158f7ae"} pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:28:55 crc kubenswrapper[4870]: I0130 08:28:55.251253 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" containerID="cri-o://736e1ea4b0b2b4fa81dc9ec4fa9950e05f221b62734f7e8de7d7969e9158f7ae" gracePeriod=600 Jan 30 08:28:55 crc kubenswrapper[4870]: E0130 08:28:55.521990 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" containerID="ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 30 08:28:55 crc kubenswrapper[4870]: E0130 08:28:55.523579 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 30 08:28:55 crc kubenswrapper[4870]: E0130 08:28:55.525034 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 30 08:28:55 crc kubenswrapper[4870]: E0130 08:28:55.525096 4870 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="d501bb9c-d88d-4362-a48e-4d0347ecc90e" containerName="watcher-applier" Jan 30 08:28:55 crc kubenswrapper[4870]: E0130 08:28:55.652543 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ab755a04f7323eeb1d2f9974fb01380d8fa7fa3cba2f17b2589a502cb34ef24 is running failed: container process not found" containerID="5ab755a04f7323eeb1d2f9974fb01380d8fa7fa3cba2f17b2589a502cb34ef24" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 30 08:28:55 crc kubenswrapper[4870]: E0130 08:28:55.654267 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ab755a04f7323eeb1d2f9974fb01380d8fa7fa3cba2f17b2589a502cb34ef24 is running failed: container process not found" containerID="5ab755a04f7323eeb1d2f9974fb01380d8fa7fa3cba2f17b2589a502cb34ef24" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 30 08:28:55 crc kubenswrapper[4870]: E0130 08:28:55.654606 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ab755a04f7323eeb1d2f9974fb01380d8fa7fa3cba2f17b2589a502cb34ef24 is running failed: container process not found" containerID="5ab755a04f7323eeb1d2f9974fb01380d8fa7fa3cba2f17b2589a502cb34ef24" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 30 08:28:55 crc kubenswrapper[4870]: E0130 08:28:55.654674 4870 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ab755a04f7323eeb1d2f9974fb01380d8fa7fa3cba2f17b2589a502cb34ef24 is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-decision-engine-0" podUID="8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f" containerName="watcher-decision-engine" Jan 30 08:28:55 crc kubenswrapper[4870]: I0130 08:28:55.669016 4870 generic.go:334] "Generic (PLEG): container finished" podID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerID="736e1ea4b0b2b4fa81dc9ec4fa9950e05f221b62734f7e8de7d7969e9158f7ae" exitCode=0 Jan 30 08:28:55 crc kubenswrapper[4870]: I0130 08:28:55.669057 4870 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerDied","Data":"736e1ea4b0b2b4fa81dc9ec4fa9950e05f221b62734f7e8de7d7969e9158f7ae"} Jan 30 08:28:55 crc kubenswrapper[4870]: I0130 08:28:55.669091 4870 scope.go:117] "RemoveContainer" containerID="902c9bb8b96922377d8d6da6fb79b392e9bc4a710daf7c3c1a77d5b9c2b536ac" Jan 30 08:28:55 crc kubenswrapper[4870]: I0130 08:28:55.821614 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="b63b26d0-7049-490c-97dc-117bbbf5fa01" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9322/\": dial tcp 10.217.0.165:9322: connect: connection refused" Jan 30 08:28:55 crc kubenswrapper[4870]: I0130 08:28:55.821679 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="b63b26d0-7049-490c-97dc-117bbbf5fa01" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.165:9322/\": dial tcp 10.217.0.165:9322: connect: connection refused" Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.689383 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-b57k5" event={"ID":"1435e0c6-e24a-44d4-bf78-3e5300e23cdd","Type":"ContainerDied","Data":"09157ba4d27c88f05b3bcbf87559e2f7cd18de58cca7f08d086b19254b605ef0"} Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.689824 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09157ba4d27c88f05b3bcbf87559e2f7cd18de58cca7f08d086b19254b605ef0" Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.692204 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f","Type":"ContainerDied","Data":"688a235f4ea3f6827668ea12cec3c801c91b77e9c743386ea44f9613e759f338"} Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.692242 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="688a235f4ea3f6827668ea12cec3c801c91b77e9c743386ea44f9613e759f338" Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.694964 4870 generic.go:334] "Generic (PLEG): container finished" podID="b63b26d0-7049-490c-97dc-117bbbf5fa01" containerID="be9af03cdaac6d198c36c22f6a72da93c6a8876d7522a083b84ad37b4c4205a3" exitCode=0 Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.694999 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b63b26d0-7049-490c-97dc-117bbbf5fa01","Type":"ContainerDied","Data":"be9af03cdaac6d198c36c22f6a72da93c6a8876d7522a083b84ad37b4c4205a3"} Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.736045 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.791833 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-b57k5" Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.818464 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-combined-ca-bundle\") pod \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\" (UID: \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\") " Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.818596 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sskqz\" (UniqueName: \"kubernetes.io/projected/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-kube-api-access-sskqz\") pod \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\" (UID: \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\") " Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.818633 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-config-data\") pod \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\" (UID: \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\") " Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.818666 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-logs\") pod \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\" (UID: \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\") " Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.818740 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-custom-prometheus-ca\") pod \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\" (UID: \"8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f\") " Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.825600 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-logs" (OuterVolumeSpecName: "logs") pod "8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f" (UID: "8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.864303 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-kube-api-access-sskqz" (OuterVolumeSpecName: "kube-api-access-sskqz") pod "8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f" (UID: "8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f"). InnerVolumeSpecName "kube-api-access-sskqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.919848 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-combined-ca-bundle\") pod \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\" (UID: \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\") " Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.920248 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-scripts\") pod \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\" (UID: \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\") " Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.920410 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2pwt\" (UniqueName: \"kubernetes.io/projected/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-kube-api-access-n2pwt\") pod \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\" (UID: \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\") " Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.920450 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-config-data\") pod \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\" (UID: \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\") " Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.920611 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-logs\") pod \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\" (UID: \"1435e0c6-e24a-44d4-bf78-3e5300e23cdd\") " Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.921105 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sskqz\" (UniqueName: \"kubernetes.io/projected/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-kube-api-access-sskqz\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.921130 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.921488 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-logs" (OuterVolumeSpecName: "logs") pod "1435e0c6-e24a-44d4-bf78-3e5300e23cdd" (UID: "1435e0c6-e24a-44d4-bf78-3e5300e23cdd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.929341 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-kube-api-access-n2pwt" (OuterVolumeSpecName: "kube-api-access-n2pwt") pod "1435e0c6-e24a-44d4-bf78-3e5300e23cdd" (UID: "1435e0c6-e24a-44d4-bf78-3e5300e23cdd"). InnerVolumeSpecName "kube-api-access-n2pwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.933015 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-scripts" (OuterVolumeSpecName: "scripts") pod "1435e0c6-e24a-44d4-bf78-3e5300e23cdd" (UID: "1435e0c6-e24a-44d4-bf78-3e5300e23cdd"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.950059 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f" (UID: "8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.953770 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f" (UID: "8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.978474 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-config-data" (OuterVolumeSpecName: "config-data") pod "1435e0c6-e24a-44d4-bf78-3e5300e23cdd" (UID: "1435e0c6-e24a-44d4-bf78-3e5300e23cdd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.978583 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1435e0c6-e24a-44d4-bf78-3e5300e23cdd" (UID: "1435e0c6-e24a-44d4-bf78-3e5300e23cdd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:56 crc kubenswrapper[4870]: I0130 08:28:56.996766 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-config-data" (OuterVolumeSpecName: "config-data") pod "8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f" (UID: "8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.000892 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.022668 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.022712 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.022729 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.022741 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.022752 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.022764 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2pwt\" (UniqueName: \"kubernetes.io/projected/1435e0c6-e24a-44d4-bf78-3e5300e23cdd-kube-api-access-n2pwt\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.022779 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.022794 4870 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.128455 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkg6n\" (UniqueName: \"kubernetes.io/projected/b63b26d0-7049-490c-97dc-117bbbf5fa01-kube-api-access-kkg6n\") pod \"b63b26d0-7049-490c-97dc-117bbbf5fa01\" (UID: \"b63b26d0-7049-490c-97dc-117bbbf5fa01\") " Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.128559 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b63b26d0-7049-490c-97dc-117bbbf5fa01-logs\") pod \"b63b26d0-7049-490c-97dc-117bbbf5fa01\" (UID: \"b63b26d0-7049-490c-97dc-117bbbf5fa01\") " Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.128598 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63b26d0-7049-490c-97dc-117bbbf5fa01-combined-ca-bundle\") pod \"b63b26d0-7049-490c-97dc-117bbbf5fa01\" (UID: \"b63b26d0-7049-490c-97dc-117bbbf5fa01\") " Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.128686 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b63b26d0-7049-490c-97dc-117bbbf5fa01-custom-prometheus-ca\") pod \"b63b26d0-7049-490c-97dc-117bbbf5fa01\" 
(UID: \"b63b26d0-7049-490c-97dc-117bbbf5fa01\") " Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.128777 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b63b26d0-7049-490c-97dc-117bbbf5fa01-config-data\") pod \"b63b26d0-7049-490c-97dc-117bbbf5fa01\" (UID: \"b63b26d0-7049-490c-97dc-117bbbf5fa01\") " Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.129064 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b63b26d0-7049-490c-97dc-117bbbf5fa01-logs" (OuterVolumeSpecName: "logs") pod "b63b26d0-7049-490c-97dc-117bbbf5fa01" (UID: "b63b26d0-7049-490c-97dc-117bbbf5fa01"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.129432 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b63b26d0-7049-490c-97dc-117bbbf5fa01-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.136143 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b63b26d0-7049-490c-97dc-117bbbf5fa01-kube-api-access-kkg6n" (OuterVolumeSpecName: "kube-api-access-kkg6n") pod "b63b26d0-7049-490c-97dc-117bbbf5fa01" (UID: "b63b26d0-7049-490c-97dc-117bbbf5fa01"). InnerVolumeSpecName "kube-api-access-kkg6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.155997 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b63b26d0-7049-490c-97dc-117bbbf5fa01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b63b26d0-7049-490c-97dc-117bbbf5fa01" (UID: "b63b26d0-7049-490c-97dc-117bbbf5fa01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.171967 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b63b26d0-7049-490c-97dc-117bbbf5fa01-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "b63b26d0-7049-490c-97dc-117bbbf5fa01" (UID: "b63b26d0-7049-490c-97dc-117bbbf5fa01"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.178671 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b63b26d0-7049-490c-97dc-117bbbf5fa01-config-data" (OuterVolumeSpecName: "config-data") pod "b63b26d0-7049-490c-97dc-117bbbf5fa01" (UID: "b63b26d0-7049-490c-97dc-117bbbf5fa01"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.231642 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63b26d0-7049-490c-97dc-117bbbf5fa01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.231685 4870 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b63b26d0-7049-490c-97dc-117bbbf5fa01-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.231712 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b63b26d0-7049-490c-97dc-117bbbf5fa01-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.231729 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkg6n\" (UniqueName: \"kubernetes.io/projected/b63b26d0-7049-490c-97dc-117bbbf5fa01-kube-api-access-kkg6n\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.240065 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.253235 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.709652 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2","Type":"ContainerStarted","Data":"eb3296110237841074b9a042e7cdf380a70fa9819f62258f821b5124b40eb835"} Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.713609 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-d2mx7" event={"ID":"c3bd649e-5c3c-495f-933f-3b516167cbd2","Type":"ContainerStarted","Data":"3e1279140ba8261786354ed9fbacfe2a2a43a2b8decaba7ca7c7b15754ed7ff9"} Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.731683 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerStarted","Data":"fed0dc1b3541c4793a049fc7617c3773e2d05f0ebfb934d3acc5ededede3b844"} Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.735744 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-d2mx7" podStartSLOduration=3.315431998 podStartE2EDuration="1m2.73572564s" podCreationTimestamp="2026-01-30 08:27:55 +0000 UTC" firstStartedPulling="2026-01-30 08:27:57.068697007 +0000 UTC m=+1115.764244116" lastFinishedPulling="2026-01-30 08:28:56.488990629 +0000 UTC m=+1175.184537758" observedRunningTime="2026-01-30 08:28:57.731747755 +0000 UTC m=+1176.427294874" watchObservedRunningTime="2026-01-30 08:28:57.73572564 +0000 UTC m=+1176.431272749" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.738397 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-b57k5" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.740231 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.741446 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.741453 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b63b26d0-7049-490c-97dc-117bbbf5fa01","Type":"ContainerDied","Data":"e929fb45514c18c709b7cf772f7d4133121a7d17d636aed37d87cc9bcf50a23c"} Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.741519 4870 scope.go:117] "RemoveContainer" containerID="be9af03cdaac6d198c36c22f6a72da93c6a8876d7522a083b84ad37b4c4205a3" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.768391 4870 scope.go:117] "RemoveContainer" containerID="2f2cc7b48ae21cc95e0db1fb5e108ac03cb5f4ea23904284d0731df4d6012673" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.802925 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.816716 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.829922 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.852380 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.863311 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 30 08:28:57 crc kubenswrapper[4870]: E0130 08:28:57.863690 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1435e0c6-e24a-44d4-bf78-3e5300e23cdd" containerName="placement-db-sync" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.863705 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="1435e0c6-e24a-44d4-bf78-3e5300e23cdd" containerName="placement-db-sync" Jan 30 08:28:57 crc kubenswrapper[4870]: E0130 08:28:57.863739 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b63b26d0-7049-490c-97dc-117bbbf5fa01" containerName="watcher-api" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.863745 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b63b26d0-7049-490c-97dc-117bbbf5fa01" containerName="watcher-api" Jan 30 08:28:57 crc kubenswrapper[4870]: E0130 08:28:57.863757 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b63b26d0-7049-490c-97dc-117bbbf5fa01" containerName="watcher-api-log" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.863764 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b63b26d0-7049-490c-97dc-117bbbf5fa01" containerName="watcher-api-log" Jan 30 08:28:57 crc kubenswrapper[4870]: E0130 08:28:57.863787 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f" containerName="watcher-decision-engine" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.863794 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f" containerName="watcher-decision-engine" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.863971 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f" containerName="watcher-decision-engine" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.863984 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="1435e0c6-e24a-44d4-bf78-3e5300e23cdd" containerName="placement-db-sync" Jan 30 08:28:57 crc 
kubenswrapper[4870]: I0130 08:28:57.863991 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="b63b26d0-7049-490c-97dc-117bbbf5fa01" containerName="watcher-api-log" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.864007 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="b63b26d0-7049-490c-97dc-117bbbf5fa01" containerName="watcher-api" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.864575 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.885774 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.889936 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.910235 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.911907 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.931143 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.931311 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.936130 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.954216 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8628af25-d5e4-46a0-adec-4c25ca39676b-logs\") pod \"watcher-decision-engine-0\" (UID: \"8628af25-d5e4-46a0-adec-4c25ca39676b\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.954277 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8628af25-d5e4-46a0-adec-4c25ca39676b-config-data\") pod \"watcher-decision-engine-0\" (UID: \"8628af25-d5e4-46a0-adec-4c25ca39676b\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.954351 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8628af25-d5e4-46a0-adec-4c25ca39676b-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"8628af25-d5e4-46a0-adec-4c25ca39676b\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.954377 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9h65\" (UniqueName: \"kubernetes.io/projected/8628af25-d5e4-46a0-adec-4c25ca39676b-kube-api-access-l9h65\") pod \"watcher-decision-engine-0\" (UID: \"8628af25-d5e4-46a0-adec-4c25ca39676b\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:28:57 crc kubenswrapper[4870]: I0130 08:28:57.954414 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/8628af25-d5e4-46a0-adec-4c25ca39676b-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"8628af25-d5e4-46a0-adec-4c25ca39676b\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.002945 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.064771 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/964cd6aa-bebd-412e-bd1c-001d151a90e8-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"964cd6aa-bebd-412e-bd1c-001d151a90e8\") " pod="openstack/watcher-api-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.064837 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8628af25-d5e4-46a0-adec-4c25ca39676b-config-data\") pod \"watcher-decision-engine-0\" (UID: \"8628af25-d5e4-46a0-adec-4c25ca39676b\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.064911 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/964cd6aa-bebd-412e-bd1c-001d151a90e8-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"964cd6aa-bebd-412e-bd1c-001d151a90e8\") " pod="openstack/watcher-api-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.064952 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8628af25-d5e4-46a0-adec-4c25ca39676b-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"8628af25-d5e4-46a0-adec-4c25ca39676b\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.064974 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/964cd6aa-bebd-412e-bd1c-001d151a90e8-logs\") pod \"watcher-api-0\" (UID: \"964cd6aa-bebd-412e-bd1c-001d151a90e8\") " pod="openstack/watcher-api-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.064994 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9h65\" (UniqueName: \"kubernetes.io/projected/8628af25-d5e4-46a0-adec-4c25ca39676b-kube-api-access-l9h65\") pod \"watcher-decision-engine-0\" (UID: \"8628af25-d5e4-46a0-adec-4c25ca39676b\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.065021 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l777x\" (UniqueName: \"kubernetes.io/projected/964cd6aa-bebd-412e-bd1c-001d151a90e8-kube-api-access-l777x\") pod \"watcher-api-0\" (UID: \"964cd6aa-bebd-412e-bd1c-001d151a90e8\") " pod="openstack/watcher-api-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.065040 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8628af25-d5e4-46a0-adec-4c25ca39676b-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"8628af25-d5e4-46a0-adec-4c25ca39676b\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.065058 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/964cd6aa-bebd-412e-bd1c-001d151a90e8-config-data\") pod \"watcher-api-0\" (UID: \"964cd6aa-bebd-412e-bd1c-001d151a90e8\") " pod="openstack/watcher-api-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.065076 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/964cd6aa-bebd-412e-bd1c-001d151a90e8-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"964cd6aa-bebd-412e-bd1c-001d151a90e8\") " pod="openstack/watcher-api-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.065112 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8628af25-d5e4-46a0-adec-4c25ca39676b-logs\") pod \"watcher-decision-engine-0\" (UID: \"8628af25-d5e4-46a0-adec-4c25ca39676b\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.065128 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/964cd6aa-bebd-412e-bd1c-001d151a90e8-public-tls-certs\") pod \"watcher-api-0\" (UID: \"964cd6aa-bebd-412e-bd1c-001d151a90e8\") " pod="openstack/watcher-api-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.075527 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8628af25-d5e4-46a0-adec-4c25ca39676b-logs\") pod \"watcher-decision-engine-0\" (UID: \"8628af25-d5e4-46a0-adec-4c25ca39676b\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.075620 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8628af25-d5e4-46a0-adec-4c25ca39676b-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"8628af25-d5e4-46a0-adec-4c25ca39676b\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.077023 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8628af25-d5e4-46a0-adec-4c25ca39676b-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"8628af25-d5e4-46a0-adec-4c25ca39676b\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.080086 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8628af25-d5e4-46a0-adec-4c25ca39676b-config-data\") pod \"watcher-decision-engine-0\" (UID: \"8628af25-d5e4-46a0-adec-4c25ca39676b\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.100090 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f" path="/var/lib/kubelet/pods/8eb0ea94-f1b1-41c4-a968-ff1d4af60e2f/volumes" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.101190 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b63b26d0-7049-490c-97dc-117bbbf5fa01" path="/var/lib/kubelet/pods/b63b26d0-7049-490c-97dc-117bbbf5fa01/volumes" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.101825 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9h65\" 
(UniqueName: \"kubernetes.io/projected/8628af25-d5e4-46a0-adec-4c25ca39676b-kube-api-access-l9h65\") pod \"watcher-decision-engine-0\" (UID: \"8628af25-d5e4-46a0-adec-4c25ca39676b\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.102176 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-56cfc8cc98-pfz9w"] Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.103571 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.107699 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.107940 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-skpxp" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.112479 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.112576 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.113281 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.119581 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-56cfc8cc98-pfz9w"] Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.216709 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.249795 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0bafb1e-cef8-4a8c-bb78-a5d11d098691-internal-tls-certs\") pod \"placement-56cfc8cc98-pfz9w\" (UID: \"a0bafb1e-cef8-4a8c-bb78-a5d11d098691\") " pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.249862 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0bafb1e-cef8-4a8c-bb78-a5d11d098691-combined-ca-bundle\") pod \"placement-56cfc8cc98-pfz9w\" (UID: \"a0bafb1e-cef8-4a8c-bb78-a5d11d098691\") " pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.249953 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/964cd6aa-bebd-412e-bd1c-001d151a90e8-public-tls-certs\") pod \"watcher-api-0\" (UID: \"964cd6aa-bebd-412e-bd1c-001d151a90e8\") " pod="openstack/watcher-api-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.250015 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/964cd6aa-bebd-412e-bd1c-001d151a90e8-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"964cd6aa-bebd-412e-bd1c-001d151a90e8\") " pod="openstack/watcher-api-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.252046 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/964cd6aa-bebd-412e-bd1c-001d151a90e8-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"964cd6aa-bebd-412e-bd1c-001d151a90e8\") " pod="openstack/watcher-api-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.252088 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0bafb1e-cef8-4a8c-bb78-a5d11d098691-scripts\") pod \"placement-56cfc8cc98-pfz9w\" (UID: \"a0bafb1e-cef8-4a8c-bb78-a5d11d098691\") " pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.252120 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0bafb1e-cef8-4a8c-bb78-a5d11d098691-logs\") pod \"placement-56cfc8cc98-pfz9w\" (UID: \"a0bafb1e-cef8-4a8c-bb78-a5d11d098691\") " pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.252177 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0bafb1e-cef8-4a8c-bb78-a5d11d098691-config-data\") pod \"placement-56cfc8cc98-pfz9w\" (UID: \"a0bafb1e-cef8-4a8c-bb78-a5d11d098691\") " pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.252255 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/964cd6aa-bebd-412e-bd1c-001d151a90e8-logs\") pod \"watcher-api-0\" (UID: \"964cd6aa-bebd-412e-bd1c-001d151a90e8\") " pod="openstack/watcher-api-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.252282 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk9sj\" (UniqueName: \"kubernetes.io/projected/a0bafb1e-cef8-4a8c-bb78-a5d11d098691-kube-api-access-mk9sj\") pod \"placement-56cfc8cc98-pfz9w\" (UID: \"a0bafb1e-cef8-4a8c-bb78-a5d11d098691\") " pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.252319 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0bafb1e-cef8-4a8c-bb78-a5d11d098691-public-tls-certs\") pod \"placement-56cfc8cc98-pfz9w\" (UID: \"a0bafb1e-cef8-4a8c-bb78-a5d11d098691\") " pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.252381 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l777x\" (UniqueName: \"kubernetes.io/projected/964cd6aa-bebd-412e-bd1c-001d151a90e8-kube-api-access-l777x\") pod \"watcher-api-0\" (UID: \"964cd6aa-bebd-412e-bd1c-001d151a90e8\") " pod="openstack/watcher-api-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.252420 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/964cd6aa-bebd-412e-bd1c-001d151a90e8-config-data\") pod \"watcher-api-0\" (UID: \"964cd6aa-bebd-412e-bd1c-001d151a90e8\") " pod="openstack/watcher-api-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.252459 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/964cd6aa-bebd-412e-bd1c-001d151a90e8-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: 
\"964cd6aa-bebd-412e-bd1c-001d151a90e8\") " pod="openstack/watcher-api-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.253858 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/964cd6aa-bebd-412e-bd1c-001d151a90e8-logs\") pod \"watcher-api-0\" (UID: \"964cd6aa-bebd-412e-bd1c-001d151a90e8\") " pod="openstack/watcher-api-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.259884 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/964cd6aa-bebd-412e-bd1c-001d151a90e8-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"964cd6aa-bebd-412e-bd1c-001d151a90e8\") " pod="openstack/watcher-api-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.259830 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/964cd6aa-bebd-412e-bd1c-001d151a90e8-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"964cd6aa-bebd-412e-bd1c-001d151a90e8\") " pod="openstack/watcher-api-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.260826 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/964cd6aa-bebd-412e-bd1c-001d151a90e8-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"964cd6aa-bebd-412e-bd1c-001d151a90e8\") " pod="openstack/watcher-api-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.265120 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/964cd6aa-bebd-412e-bd1c-001d151a90e8-config-data\") pod \"watcher-api-0\" (UID: \"964cd6aa-bebd-412e-bd1c-001d151a90e8\") " pod="openstack/watcher-api-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.275753 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/964cd6aa-bebd-412e-bd1c-001d151a90e8-public-tls-certs\") pod \"watcher-api-0\" (UID: \"964cd6aa-bebd-412e-bd1c-001d151a90e8\") " pod="openstack/watcher-api-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.288380 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l777x\" (UniqueName: \"kubernetes.io/projected/964cd6aa-bebd-412e-bd1c-001d151a90e8-kube-api-access-l777x\") pod \"watcher-api-0\" (UID: \"964cd6aa-bebd-412e-bd1c-001d151a90e8\") " pod="openstack/watcher-api-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.354922 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0bafb1e-cef8-4a8c-bb78-a5d11d098691-scripts\") pod \"placement-56cfc8cc98-pfz9w\" (UID: \"a0bafb1e-cef8-4a8c-bb78-a5d11d098691\") " pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.354971 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0bafb1e-cef8-4a8c-bb78-a5d11d098691-logs\") pod \"placement-56cfc8cc98-pfz9w\" (UID: \"a0bafb1e-cef8-4a8c-bb78-a5d11d098691\") " pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.355005 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0bafb1e-cef8-4a8c-bb78-a5d11d098691-config-data\") pod \"placement-56cfc8cc98-pfz9w\" 
(UID: \"a0bafb1e-cef8-4a8c-bb78-a5d11d098691\") " pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.355056 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk9sj\" (UniqueName: \"kubernetes.io/projected/a0bafb1e-cef8-4a8c-bb78-a5d11d098691-kube-api-access-mk9sj\") pod \"placement-56cfc8cc98-pfz9w\" (UID: \"a0bafb1e-cef8-4a8c-bb78-a5d11d098691\") " pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.355088 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0bafb1e-cef8-4a8c-bb78-a5d11d098691-public-tls-certs\") pod \"placement-56cfc8cc98-pfz9w\" (UID: \"a0bafb1e-cef8-4a8c-bb78-a5d11d098691\") " pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.355146 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0bafb1e-cef8-4a8c-bb78-a5d11d098691-internal-tls-certs\") pod \"placement-56cfc8cc98-pfz9w\" (UID: \"a0bafb1e-cef8-4a8c-bb78-a5d11d098691\") " pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.355177 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0bafb1e-cef8-4a8c-bb78-a5d11d098691-combined-ca-bundle\") pod \"placement-56cfc8cc98-pfz9w\" (UID: \"a0bafb1e-cef8-4a8c-bb78-a5d11d098691\") " pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.356593 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0bafb1e-cef8-4a8c-bb78-a5d11d098691-logs\") pod \"placement-56cfc8cc98-pfz9w\" (UID: \"a0bafb1e-cef8-4a8c-bb78-a5d11d098691\") " pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.357724 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0bafb1e-cef8-4a8c-bb78-a5d11d098691-scripts\") pod \"placement-56cfc8cc98-pfz9w\" (UID: \"a0bafb1e-cef8-4a8c-bb78-a5d11d098691\") " pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.358386 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0bafb1e-cef8-4a8c-bb78-a5d11d098691-combined-ca-bundle\") pod \"placement-56cfc8cc98-pfz9w\" (UID: \"a0bafb1e-cef8-4a8c-bb78-a5d11d098691\") " pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.365409 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0bafb1e-cef8-4a8c-bb78-a5d11d098691-public-tls-certs\") pod \"placement-56cfc8cc98-pfz9w\" (UID: \"a0bafb1e-cef8-4a8c-bb78-a5d11d098691\") " pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.369363 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0bafb1e-cef8-4a8c-bb78-a5d11d098691-config-data\") pod \"placement-56cfc8cc98-pfz9w\" (UID: \"a0bafb1e-cef8-4a8c-bb78-a5d11d098691\") " pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 
08:28:58.376466 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0bafb1e-cef8-4a8c-bb78-a5d11d098691-internal-tls-certs\") pod \"placement-56cfc8cc98-pfz9w\" (UID: \"a0bafb1e-cef8-4a8c-bb78-a5d11d098691\") " pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.376554 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk9sj\" (UniqueName: \"kubernetes.io/projected/a0bafb1e-cef8-4a8c-bb78-a5d11d098691-kube-api-access-mk9sj\") pod \"placement-56cfc8cc98-pfz9w\" (UID: \"a0bafb1e-cef8-4a8c-bb78-a5d11d098691\") " pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.542412 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.552607 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 30 08:28:58 crc kubenswrapper[4870]: I0130 08:28:58.771268 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 30 08:28:58 crc kubenswrapper[4870]: W0130 08:28:58.789032 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8628af25_d5e4_46a0_adec_4c25ca39676b.slice/crio-ac51c1c09a44741e0ef19b1f29a0de83b725ade9e973b9a5af1ac06cc507c0dc WatchSource:0}: Error finding container ac51c1c09a44741e0ef19b1f29a0de83b725ade9e973b9a5af1ac06cc507c0dc: Status 404 returned error can't find the container with id ac51c1c09a44741e0ef19b1f29a0de83b725ade9e973b9a5af1ac06cc507c0dc Jan 30 08:28:59 crc kubenswrapper[4870]: I0130 08:28:59.137844 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-56cfc8cc98-pfz9w"] Jan 30 08:28:59 crc kubenswrapper[4870]: W0130 08:28:59.145222 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0bafb1e_cef8_4a8c_bb78_a5d11d098691.slice/crio-8af75310cadb631a54fb1863b40ad4d025d0b2489ced96ec922b0c3169cd9a29 WatchSource:0}: Error finding container 8af75310cadb631a54fb1863b40ad4d025d0b2489ced96ec922b0c3169cd9a29: Status 404 returned error can't find the container with id 8af75310cadb631a54fb1863b40ad4d025d0b2489ced96ec922b0c3169cd9a29 Jan 30 08:28:59 crc kubenswrapper[4870]: I0130 08:28:59.246015 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 30 08:28:59 crc kubenswrapper[4870]: I0130 08:28:59.734974 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-769d7654db-gw44c" Jan 30 08:28:59 crc kubenswrapper[4870]: I0130 08:28:59.784183 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"964cd6aa-bebd-412e-bd1c-001d151a90e8","Type":"ContainerStarted","Data":"0fc045b8752b23ed63b788c40396f5c241006226d574a9afb4b1fad123932dc3"} Jan 30 08:28:59 crc kubenswrapper[4870]: I0130 08:28:59.784547 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"964cd6aa-bebd-412e-bd1c-001d151a90e8","Type":"ContainerStarted","Data":"58f3c594be8eea56316c01512e5ab0263514c3971eb0192f72124a7b62f71734"} Jan 30 08:28:59 crc kubenswrapper[4870]: I0130 08:28:59.790317 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/watcher-decision-engine-0" event={"ID":"8628af25-d5e4-46a0-adec-4c25ca39676b","Type":"ContainerStarted","Data":"6d57bab3d6c90d46f3ba26f44c864a6ec85718286b3ff7835459214c488726bc"} Jan 30 08:28:59 crc kubenswrapper[4870]: I0130 08:28:59.790355 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8628af25-d5e4-46a0-adec-4c25ca39676b","Type":"ContainerStarted","Data":"ac51c1c09a44741e0ef19b1f29a0de83b725ade9e973b9a5af1ac06cc507c0dc"} Jan 30 08:28:59 crc kubenswrapper[4870]: I0130 08:28:59.817455 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-74569d8966-5sjxs"] Jan 30 08:28:59 crc kubenswrapper[4870]: I0130 08:28:59.817730 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-74569d8966-5sjxs" podUID="1872a14d-aeff-46f7-8430-c6fe0eb6973b" containerName="horizon-log" containerID="cri-o://43897c9b161d8b60a5a17a99eb44b17b490c36c5693b312f71be7235a7fc2b25" gracePeriod=30 Jan 30 08:28:59 crc kubenswrapper[4870]: I0130 08:28:59.817865 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-74569d8966-5sjxs" podUID="1872a14d-aeff-46f7-8430-c6fe0eb6973b" containerName="horizon" containerID="cri-o://960b2516ed61848524349c99edfb46a19e044030828811dfd8fbc7164a66aca3" gracePeriod=30 Jan 30 08:28:59 crc kubenswrapper[4870]: I0130 08:28:59.818892 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:28:59 crc kubenswrapper[4870]: I0130 08:28:59.829288 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56cfc8cc98-pfz9w" event={"ID":"a0bafb1e-cef8-4a8c-bb78-a5d11d098691","Type":"ContainerStarted","Data":"5593830809dede98478a71c2749bca27e1d4bfb98c7d1db4c427feb53c451e33"} Jan 30 08:28:59 crc kubenswrapper[4870]: I0130 08:28:59.829336 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56cfc8cc98-pfz9w" event={"ID":"a0bafb1e-cef8-4a8c-bb78-a5d11d098691","Type":"ContainerStarted","Data":"8af75310cadb631a54fb1863b40ad4d025d0b2489ced96ec922b0c3169cd9a29"} Jan 30 08:28:59 crc kubenswrapper[4870]: I0130 08:28:59.835769 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74569d8966-5sjxs" podUID="1872a14d-aeff-46f7-8430-c6fe0eb6973b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:43982->10.217.0.162:8443: read: connection reset by peer" Jan 30 08:28:59 crc kubenswrapper[4870]: I0130 08:28:59.847362 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.847337795 podStartE2EDuration="2.847337795s" podCreationTimestamp="2026-01-30 08:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:28:59.844472565 +0000 UTC m=+1178.540019684" watchObservedRunningTime="2026-01-30 08:28:59.847337795 +0000 UTC m=+1178.542884904" Jan 30 08:29:00 crc kubenswrapper[4870]: E0130 08:29:00.524064 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 
30 08:29:00 crc kubenswrapper[4870]: E0130 08:29:00.526393 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 30 08:29:00 crc kubenswrapper[4870]: E0130 08:29:00.528549 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 30 08:29:00 crc kubenswrapper[4870]: E0130 08:29:00.528675 4870 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="d501bb9c-d88d-4362-a48e-4d0347ecc90e" containerName="watcher-applier" Jan 30 08:29:00 crc kubenswrapper[4870]: I0130 08:29:00.847184 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"964cd6aa-bebd-412e-bd1c-001d151a90e8","Type":"ContainerStarted","Data":"9171df5b2927af9eba44065ea758545f3bfbeb9a8d3fe9faa2aa3e871785a17d"} Jan 30 08:29:00 crc kubenswrapper[4870]: I0130 08:29:00.847516 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 30 08:29:00 crc kubenswrapper[4870]: I0130 08:29:00.850023 4870 generic.go:334] "Generic (PLEG): container finished" podID="1872a14d-aeff-46f7-8430-c6fe0eb6973b" containerID="960b2516ed61848524349c99edfb46a19e044030828811dfd8fbc7164a66aca3" exitCode=0 Jan 30 08:29:00 crc kubenswrapper[4870]: I0130 08:29:00.850049 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74569d8966-5sjxs" event={"ID":"1872a14d-aeff-46f7-8430-c6fe0eb6973b","Type":"ContainerDied","Data":"960b2516ed61848524349c99edfb46a19e044030828811dfd8fbc7164a66aca3"} Jan 30 08:29:00 crc kubenswrapper[4870]: I0130 08:29:00.855636 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56cfc8cc98-pfz9w" event={"ID":"a0bafb1e-cef8-4a8c-bb78-a5d11d098691","Type":"ContainerStarted","Data":"b6151bfe62806a4b4bcfcd0ac7b669915ac2f5e32e8795136f885dfd849e126d"} Jan 30 08:29:00 crc kubenswrapper[4870]: I0130 08:29:00.855713 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:29:00 crc kubenswrapper[4870]: I0130 08:29:00.895001 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-56cfc8cc98-pfz9w" podStartSLOduration=2.894983941 podStartE2EDuration="2.894983941s" podCreationTimestamp="2026-01-30 08:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:29:00.892803893 +0000 UTC m=+1179.588351002" watchObservedRunningTime="2026-01-30 08:29:00.894983941 +0000 UTC m=+1179.590531060" Jan 30 08:29:00 crc kubenswrapper[4870]: I0130 08:29:00.902361 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.902339822 podStartE2EDuration="3.902339822s" podCreationTimestamp="2026-01-30 08:28:57 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:29:00.869049316 +0000 UTC m=+1179.564596425" watchObservedRunningTime="2026-01-30 08:29:00.902339822 +0000 UTC m=+1179.597886931" Jan 30 08:29:01 crc kubenswrapper[4870]: I0130 08:29:01.867798 4870 generic.go:334] "Generic (PLEG): container finished" podID="685bde78-dea1-4864-a825-af176178bd11" containerID="c723dc182803022ba9e618ac6407cbccb617a7c5a0a43457386f580c7a154614" exitCode=0 Jan 30 08:29:01 crc kubenswrapper[4870]: I0130 08:29:01.869226 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9g27p" event={"ID":"685bde78-dea1-4864-a825-af176178bd11","Type":"ContainerDied","Data":"c723dc182803022ba9e618ac6407cbccb617a7c5a0a43457386f580c7a154614"} Jan 30 08:29:01 crc kubenswrapper[4870]: I0130 08:29:01.870265 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:29:02 crc kubenswrapper[4870]: I0130 08:29:02.880114 4870 generic.go:334] "Generic (PLEG): container finished" podID="8628af25-d5e4-46a0-adec-4c25ca39676b" containerID="6d57bab3d6c90d46f3ba26f44c864a6ec85718286b3ff7835459214c488726bc" exitCode=1 Jan 30 08:29:02 crc kubenswrapper[4870]: I0130 08:29:02.880165 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8628af25-d5e4-46a0-adec-4c25ca39676b","Type":"ContainerDied","Data":"6d57bab3d6c90d46f3ba26f44c864a6ec85718286b3ff7835459214c488726bc"} Jan 30 08:29:02 crc kubenswrapper[4870]: I0130 08:29:02.882012 4870 scope.go:117] "RemoveContainer" containerID="6d57bab3d6c90d46f3ba26f44c864a6ec85718286b3ff7835459214c488726bc" Jan 30 08:29:03 crc kubenswrapper[4870]: I0130 08:29:03.327269 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 30 08:29:03 crc kubenswrapper[4870]: I0130 08:29:03.553562 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 30 08:29:03 crc kubenswrapper[4870]: I0130 08:29:03.890048 4870 generic.go:334] "Generic (PLEG): container finished" podID="c3bd649e-5c3c-495f-933f-3b516167cbd2" containerID="3e1279140ba8261786354ed9fbacfe2a2a43a2b8decaba7ca7c7b15754ed7ff9" exitCode=0 Jan 30 08:29:03 crc kubenswrapper[4870]: I0130 08:29:03.890788 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-d2mx7" event={"ID":"c3bd649e-5c3c-495f-933f-3b516167cbd2","Type":"ContainerDied","Data":"3e1279140ba8261786354ed9fbacfe2a2a43a2b8decaba7ca7c7b15754ed7ff9"} Jan 30 08:29:04 crc kubenswrapper[4870]: I0130 08:29:04.711699 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74569d8966-5sjxs" podUID="1872a14d-aeff-46f7-8430-c6fe0eb6973b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused" Jan 30 08:29:06 crc kubenswrapper[4870]: E0130 08:29:05.522045 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 30 08:29:06 crc kubenswrapper[4870]: E0130 08:29:05.523346 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 30 08:29:06 crc kubenswrapper[4870]: E0130 08:29:05.524557 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 30 08:29:06 crc kubenswrapper[4870]: E0130 08:29:05.524591 4870 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="d501bb9c-d88d-4362-a48e-4d0347ecc90e" containerName="watcher-applier" Jan 30 08:29:06 crc kubenswrapper[4870]: I0130 08:29:06.928560 4870 generic.go:334] "Generic (PLEG): container finished" podID="4171155c-1d8c-48a0-9675-1c730f9130dc" containerID="8001ca067558561639146186888f3fefa9a3f66b8cfe6da27c20754262532feb" exitCode=137 Jan 30 08:29:06 crc kubenswrapper[4870]: I0130 08:29:06.928942 4870 generic.go:334] "Generic (PLEG): container finished" podID="4171155c-1d8c-48a0-9675-1c730f9130dc" containerID="57f324a7af1ed982422d86d531cc1073fc7e06667530fd2daf47081062016e35" exitCode=137 Jan 30 08:29:06 crc kubenswrapper[4870]: I0130 08:29:06.928980 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5949fbc84f-vdxjp" event={"ID":"4171155c-1d8c-48a0-9675-1c730f9130dc","Type":"ContainerDied","Data":"8001ca067558561639146186888f3fefa9a3f66b8cfe6da27c20754262532feb"} Jan 30 08:29:06 crc kubenswrapper[4870]: I0130 08:29:06.929019 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5949fbc84f-vdxjp" event={"ID":"4171155c-1d8c-48a0-9675-1c730f9130dc","Type":"ContainerDied","Data":"57f324a7af1ed982422d86d531cc1073fc7e06667530fd2daf47081062016e35"} Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.123017 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-d2mx7" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.126354 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9g27p" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.179707 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3bd649e-5c3c-495f-933f-3b516167cbd2-combined-ca-bundle\") pod \"c3bd649e-5c3c-495f-933f-3b516167cbd2\" (UID: \"c3bd649e-5c3c-495f-933f-3b516167cbd2\") " Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.179836 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c3bd649e-5c3c-495f-933f-3b516167cbd2-db-sync-config-data\") pod \"c3bd649e-5c3c-495f-933f-3b516167cbd2\" (UID: \"c3bd649e-5c3c-495f-933f-3b516167cbd2\") " Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.179949 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-combined-ca-bundle\") pod \"685bde78-dea1-4864-a825-af176178bd11\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.179979 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-config-data\") pod \"685bde78-dea1-4864-a825-af176178bd11\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.180036 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmq48\" (UniqueName: \"kubernetes.io/projected/c3bd649e-5c3c-495f-933f-3b516167cbd2-kube-api-access-bmq48\") pod \"c3bd649e-5c3c-495f-933f-3b516167cbd2\" (UID: \"c3bd649e-5c3c-495f-933f-3b516167cbd2\") " Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.180077 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-db-sync-config-data\") pod \"685bde78-dea1-4864-a825-af176178bd11\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.180095 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/685bde78-dea1-4864-a825-af176178bd11-etc-machine-id\") pod \"685bde78-dea1-4864-a825-af176178bd11\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.180126 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-scripts\") pod \"685bde78-dea1-4864-a825-af176178bd11\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.180148 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgl2s\" (UniqueName: \"kubernetes.io/projected/685bde78-dea1-4864-a825-af176178bd11-kube-api-access-lgl2s\") pod \"685bde78-dea1-4864-a825-af176178bd11\" (UID: \"685bde78-dea1-4864-a825-af176178bd11\") " Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.180827 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/685bde78-dea1-4864-a825-af176178bd11-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod 
"685bde78-dea1-4864-a825-af176178bd11" (UID: "685bde78-dea1-4864-a825-af176178bd11"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.196330 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "685bde78-dea1-4864-a825-af176178bd11" (UID: "685bde78-dea1-4864-a825-af176178bd11"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.196343 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-scripts" (OuterVolumeSpecName: "scripts") pod "685bde78-dea1-4864-a825-af176178bd11" (UID: "685bde78-dea1-4864-a825-af176178bd11"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.196369 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3bd649e-5c3c-495f-933f-3b516167cbd2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c3bd649e-5c3c-495f-933f-3b516167cbd2" (UID: "c3bd649e-5c3c-495f-933f-3b516167cbd2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.196409 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3bd649e-5c3c-495f-933f-3b516167cbd2-kube-api-access-bmq48" (OuterVolumeSpecName: "kube-api-access-bmq48") pod "c3bd649e-5c3c-495f-933f-3b516167cbd2" (UID: "c3bd649e-5c3c-495f-933f-3b516167cbd2"). InnerVolumeSpecName "kube-api-access-bmq48". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.196471 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/685bde78-dea1-4864-a825-af176178bd11-kube-api-access-lgl2s" (OuterVolumeSpecName: "kube-api-access-lgl2s") pod "685bde78-dea1-4864-a825-af176178bd11" (UID: "685bde78-dea1-4864-a825-af176178bd11"). InnerVolumeSpecName "kube-api-access-lgl2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.211736 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "685bde78-dea1-4864-a825-af176178bd11" (UID: "685bde78-dea1-4864-a825-af176178bd11"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.225751 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3bd649e-5c3c-495f-933f-3b516167cbd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3bd649e-5c3c-495f-933f-3b516167cbd2" (UID: "c3bd649e-5c3c-495f-933f-3b516167cbd2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.252409 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-config-data" (OuterVolumeSpecName: "config-data") pod "685bde78-dea1-4864-a825-af176178bd11" (UID: "685bde78-dea1-4864-a825-af176178bd11"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.286503 4870 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.286548 4870 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/685bde78-dea1-4864-a825-af176178bd11-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.286560 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.286575 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgl2s\" (UniqueName: \"kubernetes.io/projected/685bde78-dea1-4864-a825-af176178bd11-kube-api-access-lgl2s\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.286589 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3bd649e-5c3c-495f-933f-3b516167cbd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.286601 4870 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c3bd649e-5c3c-495f-933f-3b516167cbd2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.286611 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.286621 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/685bde78-dea1-4864-a825-af176178bd11-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.286632 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmq48\" (UniqueName: \"kubernetes.io/projected/c3bd649e-5c3c-495f-933f-3b516167cbd2-kube-api-access-bmq48\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.939204 4870 generic.go:334] "Generic (PLEG): container finished" podID="505df376-c8bc-44ce-9c14-8cf94730c550" containerID="51fd04d1413a7bb8dd1010fcf50ab478d7211a73c87542e70aaae3ce82cc9053" exitCode=0 Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.939509 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9mjj4" event={"ID":"505df376-c8bc-44ce-9c14-8cf94730c550","Type":"ContainerDied","Data":"51fd04d1413a7bb8dd1010fcf50ab478d7211a73c87542e70aaae3ce82cc9053"} Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.943180 4870 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9g27p" event={"ID":"685bde78-dea1-4864-a825-af176178bd11","Type":"ContainerDied","Data":"208d52cf2eb05b40a226e4a1738a5480e549f0f08e4a6ad6b02178c49de677f7"} Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.943222 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="208d52cf2eb05b40a226e4a1738a5480e549f0f08e4a6ad6b02178c49de677f7" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.943277 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9g27p" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.947163 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-d2mx7" event={"ID":"c3bd649e-5c3c-495f-933f-3b516167cbd2","Type":"ContainerDied","Data":"545706fcd20ca2bc6bb776e0ab7efebb3759d48b4ad6e3c5ac851eb2e476dd66"} Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.947206 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="545706fcd20ca2bc6bb776e0ab7efebb3759d48b4ad6e3c5ac851eb2e476dd66" Jan 30 08:29:07 crc kubenswrapper[4870]: I0130 08:29:07.947269 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-d2mx7" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.218243 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.218295 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.413988 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 08:29:08 crc kubenswrapper[4870]: E0130 08:29:08.414624 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="685bde78-dea1-4864-a825-af176178bd11" containerName="cinder-db-sync" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.414645 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="685bde78-dea1-4864-a825-af176178bd11" containerName="cinder-db-sync" Jan 30 08:29:08 crc kubenswrapper[4870]: E0130 08:29:08.414657 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3bd649e-5c3c-495f-933f-3b516167cbd2" containerName="barbican-db-sync" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.414664 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3bd649e-5c3c-495f-933f-3b516167cbd2" containerName="barbican-db-sync" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.415178 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3bd649e-5c3c-495f-933f-3b516167cbd2" containerName="barbican-db-sync" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.415199 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="685bde78-dea1-4864-a825-af176178bd11" containerName="cinder-db-sync" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.416302 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.438463 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.438668 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6b94ff658f-bmntr"] Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.443699 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4blb4" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.444109 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.457983 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6b94ff658f-bmntr" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.463216 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.470635 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.471004 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qmdf5" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.471288 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.498892 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.508369 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm55z\" (UniqueName: \"kubernetes.io/projected/a3bc44ff-bc04-4e44-bb13-ff62f43057f5-kube-api-access-qm55z\") pod \"barbican-worker-6b94ff658f-bmntr\" (UID: \"a3bc44ff-bc04-4e44-bb13-ff62f43057f5\") " pod="openstack/barbican-worker-6b94ff658f-bmntr" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.508474 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3bc44ff-bc04-4e44-bb13-ff62f43057f5-logs\") pod \"barbican-worker-6b94ff658f-bmntr\" (UID: \"a3bc44ff-bc04-4e44-bb13-ff62f43057f5\") " pod="openstack/barbican-worker-6b94ff658f-bmntr" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.508501 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3bc44ff-bc04-4e44-bb13-ff62f43057f5-config-data\") pod \"barbican-worker-6b94ff658f-bmntr\" (UID: \"a3bc44ff-bc04-4e44-bb13-ff62f43057f5\") " pod="openstack/barbican-worker-6b94ff658f-bmntr" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.508522 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f839b4e9-f9f0-489d-b04b-14b03ab6895b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.508561 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-config-data\") pod \"cinder-scheduler-0\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.508574 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3bc44ff-bc04-4e44-bb13-ff62f43057f5-config-data-custom\") pod \"barbican-worker-6b94ff658f-bmntr\" (UID: \"a3bc44ff-bc04-4e44-bb13-ff62f43057f5\") " pod="openstack/barbican-worker-6b94ff658f-bmntr" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.508602 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.508619 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcs4s\" (UniqueName: \"kubernetes.io/projected/f839b4e9-f9f0-489d-b04b-14b03ab6895b-kube-api-access-wcs4s\") pod \"cinder-scheduler-0\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.508652 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-scripts\") pod \"cinder-scheduler-0\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.508669 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.508695 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3bc44ff-bc04-4e44-bb13-ff62f43057f5-combined-ca-bundle\") pod \"barbican-worker-6b94ff658f-bmntr\" (UID: \"a3bc44ff-bc04-4e44-bb13-ff62f43057f5\") " pod="openstack/barbican-worker-6b94ff658f-bmntr" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.524956 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6b94ff658f-bmntr"] Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.551755 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-54fb8bddb6-w78xn"] Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.563213 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.563777 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-54fb8bddb6-w78xn" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.571760 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.623780 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-54fb8bddb6-w78xn"] Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.629957 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.630111 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcs4s\" (UniqueName: \"kubernetes.io/projected/f839b4e9-f9f0-489d-b04b-14b03ab6895b-kube-api-access-wcs4s\") pod \"cinder-scheduler-0\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.630222 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-scripts\") pod \"cinder-scheduler-0\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.630272 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a32795f-6328-4d51-a69a-60be965b17f0-combined-ca-bundle\") pod \"barbican-keystone-listener-54fb8bddb6-w78xn\" (UID: \"8a32795f-6328-4d51-a69a-60be965b17f0\") " pod="openstack/barbican-keystone-listener-54fb8bddb6-w78xn" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.630319 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.630349 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncrrs\" (UniqueName: \"kubernetes.io/projected/8a32795f-6328-4d51-a69a-60be965b17f0-kube-api-access-ncrrs\") pod \"barbican-keystone-listener-54fb8bddb6-w78xn\" (UID: \"8a32795f-6328-4d51-a69a-60be965b17f0\") " pod="openstack/barbican-keystone-listener-54fb8bddb6-w78xn" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.630377 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a32795f-6328-4d51-a69a-60be965b17f0-config-data-custom\") pod \"barbican-keystone-listener-54fb8bddb6-w78xn\" (UID: \"8a32795f-6328-4d51-a69a-60be965b17f0\") " pod="openstack/barbican-keystone-listener-54fb8bddb6-w78xn" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.630441 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3bc44ff-bc04-4e44-bb13-ff62f43057f5-combined-ca-bundle\") pod 
\"barbican-worker-6b94ff658f-bmntr\" (UID: \"a3bc44ff-bc04-4e44-bb13-ff62f43057f5\") " pod="openstack/barbican-worker-6b94ff658f-bmntr" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.630535 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm55z\" (UniqueName: \"kubernetes.io/projected/a3bc44ff-bc04-4e44-bb13-ff62f43057f5-kube-api-access-qm55z\") pod \"barbican-worker-6b94ff658f-bmntr\" (UID: \"a3bc44ff-bc04-4e44-bb13-ff62f43057f5\") " pod="openstack/barbican-worker-6b94ff658f-bmntr" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.630636 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a32795f-6328-4d51-a69a-60be965b17f0-logs\") pod \"barbican-keystone-listener-54fb8bddb6-w78xn\" (UID: \"8a32795f-6328-4d51-a69a-60be965b17f0\") " pod="openstack/barbican-keystone-listener-54fb8bddb6-w78xn" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.630672 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a32795f-6328-4d51-a69a-60be965b17f0-config-data\") pod \"barbican-keystone-listener-54fb8bddb6-w78xn\" (UID: \"8a32795f-6328-4d51-a69a-60be965b17f0\") " pod="openstack/barbican-keystone-listener-54fb8bddb6-w78xn" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.632730 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3bc44ff-bc04-4e44-bb13-ff62f43057f5-logs\") pod \"barbican-worker-6b94ff658f-bmntr\" (UID: \"a3bc44ff-bc04-4e44-bb13-ff62f43057f5\") " pod="openstack/barbican-worker-6b94ff658f-bmntr" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.632771 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3bc44ff-bc04-4e44-bb13-ff62f43057f5-config-data\") pod \"barbican-worker-6b94ff658f-bmntr\" (UID: \"a3bc44ff-bc04-4e44-bb13-ff62f43057f5\") " pod="openstack/barbican-worker-6b94ff658f-bmntr" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.634017 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f839b4e9-f9f0-489d-b04b-14b03ab6895b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.635425 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f839b4e9-f9f0-489d-b04b-14b03ab6895b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.639052 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3bc44ff-bc04-4e44-bb13-ff62f43057f5-logs\") pod \"barbican-worker-6b94ff658f-bmntr\" (UID: \"a3bc44ff-bc04-4e44-bb13-ff62f43057f5\") " pod="openstack/barbican-worker-6b94ff658f-bmntr" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.639164 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.639189 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3bc44ff-bc04-4e44-bb13-ff62f43057f5-config-data-custom\") pod \"barbican-worker-6b94ff658f-bmntr\" (UID: \"a3bc44ff-bc04-4e44-bb13-ff62f43057f5\") " pod="openstack/barbican-worker-6b94ff658f-bmntr" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.648678 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-scripts\") pod \"cinder-scheduler-0\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.649157 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3bc44ff-bc04-4e44-bb13-ff62f43057f5-config-data\") pod \"barbican-worker-6b94ff658f-bmntr\" (UID: \"a3bc44ff-bc04-4e44-bb13-ff62f43057f5\") " pod="openstack/barbican-worker-6b94ff658f-bmntr" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.649697 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.650079 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3bc44ff-bc04-4e44-bb13-ff62f43057f5-config-data-custom\") pod \"barbican-worker-6b94ff658f-bmntr\" (UID: \"a3bc44ff-bc04-4e44-bb13-ff62f43057f5\") " pod="openstack/barbican-worker-6b94ff658f-bmntr" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.650320 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.652241 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-config-data\") pod \"cinder-scheduler-0\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.652595 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.664988 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95c8f6689-d4pfh"] Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.666553 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.670770 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3bc44ff-bc04-4e44-bb13-ff62f43057f5-combined-ca-bundle\") pod \"barbican-worker-6b94ff658f-bmntr\" (UID: \"a3bc44ff-bc04-4e44-bb13-ff62f43057f5\") " pod="openstack/barbican-worker-6b94ff658f-bmntr" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.681537 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcs4s\" (UniqueName: \"kubernetes.io/projected/f839b4e9-f9f0-489d-b04b-14b03ab6895b-kube-api-access-wcs4s\") pod \"cinder-scheduler-0\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.684644 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm55z\" (UniqueName: \"kubernetes.io/projected/a3bc44ff-bc04-4e44-bb13-ff62f43057f5-kube-api-access-qm55z\") pod \"barbican-worker-6b94ff658f-bmntr\" (UID: \"a3bc44ff-bc04-4e44-bb13-ff62f43057f5\") " pod="openstack/barbican-worker-6b94ff658f-bmntr" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.720919 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95c8f6689-d4pfh"] Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.741572 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-dns-svc\") pod \"dnsmasq-dns-95c8f6689-d4pfh\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.741851 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2968w\" (UniqueName: \"kubernetes.io/projected/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-kube-api-access-2968w\") pod \"dnsmasq-dns-95c8f6689-d4pfh\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.741931 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a32795f-6328-4d51-a69a-60be965b17f0-combined-ca-bundle\") pod \"barbican-keystone-listener-54fb8bddb6-w78xn\" (UID: \"8a32795f-6328-4d51-a69a-60be965b17f0\") " pod="openstack/barbican-keystone-listener-54fb8bddb6-w78xn" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.741954 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrrs\" (UniqueName: \"kubernetes.io/projected/8a32795f-6328-4d51-a69a-60be965b17f0-kube-api-access-ncrrs\") pod \"barbican-keystone-listener-54fb8bddb6-w78xn\" (UID: \"8a32795f-6328-4d51-a69a-60be965b17f0\") " pod="openstack/barbican-keystone-listener-54fb8bddb6-w78xn" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.741999 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-ovsdbserver-sb\") pod \"dnsmasq-dns-95c8f6689-d4pfh\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.742018 4870 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a32795f-6328-4d51-a69a-60be965b17f0-config-data-custom\") pod \"barbican-keystone-listener-54fb8bddb6-w78xn\" (UID: \"8a32795f-6328-4d51-a69a-60be965b17f0\") " pod="openstack/barbican-keystone-listener-54fb8bddb6-w78xn" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.742101 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-ovsdbserver-nb\") pod \"dnsmasq-dns-95c8f6689-d4pfh\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.745801 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a32795f-6328-4d51-a69a-60be965b17f0-logs\") pod \"barbican-keystone-listener-54fb8bddb6-w78xn\" (UID: \"8a32795f-6328-4d51-a69a-60be965b17f0\") " pod="openstack/barbican-keystone-listener-54fb8bddb6-w78xn" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.745916 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a32795f-6328-4d51-a69a-60be965b17f0-config-data\") pod \"barbican-keystone-listener-54fb8bddb6-w78xn\" (UID: \"8a32795f-6328-4d51-a69a-60be965b17f0\") " pod="openstack/barbican-keystone-listener-54fb8bddb6-w78xn" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.746058 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-dns-swift-storage-0\") pod \"dnsmasq-dns-95c8f6689-d4pfh\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.746099 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-config\") pod \"dnsmasq-dns-95c8f6689-d4pfh\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.747833 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a32795f-6328-4d51-a69a-60be965b17f0-logs\") pod \"barbican-keystone-listener-54fb8bddb6-w78xn\" (UID: \"8a32795f-6328-4d51-a69a-60be965b17f0\") " pod="openstack/barbican-keystone-listener-54fb8bddb6-w78xn" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.760843 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a32795f-6328-4d51-a69a-60be965b17f0-config-data\") pod \"barbican-keystone-listener-54fb8bddb6-w78xn\" (UID: \"8a32795f-6328-4d51-a69a-60be965b17f0\") " pod="openstack/barbican-keystone-listener-54fb8bddb6-w78xn" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.762405 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a32795f-6328-4d51-a69a-60be965b17f0-combined-ca-bundle\") pod \"barbican-keystone-listener-54fb8bddb6-w78xn\" (UID: \"8a32795f-6328-4d51-a69a-60be965b17f0\") " 
pod="openstack/barbican-keystone-listener-54fb8bddb6-w78xn" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.770889 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a32795f-6328-4d51-a69a-60be965b17f0-config-data-custom\") pod \"barbican-keystone-listener-54fb8bddb6-w78xn\" (UID: \"8a32795f-6328-4d51-a69a-60be965b17f0\") " pod="openstack/barbican-keystone-listener-54fb8bddb6-w78xn" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.771317 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.777225 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95c8f6689-d4pfh"] Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.801517 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncrrs\" (UniqueName: \"kubernetes.io/projected/8a32795f-6328-4d51-a69a-60be965b17f0-kube-api-access-ncrrs\") pod \"barbican-keystone-listener-54fb8bddb6-w78xn\" (UID: \"8a32795f-6328-4d51-a69a-60be965b17f0\") " pod="openstack/barbican-keystone-listener-54fb8bddb6-w78xn" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.812626 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6b94ff658f-bmntr" Jan 30 08:29:08 crc kubenswrapper[4870]: E0130 08:29:08.852328 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-2968w ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" podUID="a59ee6c0-d68e-4e31-bf9e-1326d91c0633" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.867914 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2968w\" (UniqueName: \"kubernetes.io/projected/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-kube-api-access-2968w\") pod \"dnsmasq-dns-95c8f6689-d4pfh\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.868300 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-ovsdbserver-sb\") pod \"dnsmasq-dns-95c8f6689-d4pfh\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.869101 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-ovsdbserver-sb\") pod \"dnsmasq-dns-95c8f6689-d4pfh\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.869760 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-ovsdbserver-nb\") pod \"dnsmasq-dns-95c8f6689-d4pfh\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.869803 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-ovsdbserver-nb\") pod \"dnsmasq-dns-95c8f6689-d4pfh\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.869921 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-dns-swift-storage-0\") pod \"dnsmasq-dns-95c8f6689-d4pfh\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.869960 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-config\") pod \"dnsmasq-dns-95c8f6689-d4pfh\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.870442 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-dns-swift-storage-0\") pod \"dnsmasq-dns-95c8f6689-d4pfh\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.870952 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-config\") pod \"dnsmasq-dns-95c8f6689-d4pfh\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.871117 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-dns-svc\") pod \"dnsmasq-dns-95c8f6689-d4pfh\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.874352 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-dns-svc\") pod \"dnsmasq-dns-95c8f6689-d4pfh\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.880275 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76ddf7d98c-drvjx"] Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.881912 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.910065 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-ddfdbf76d-mfqhx"] Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.911811 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.914151 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-54fb8bddb6-w78xn" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.919731 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2968w\" (UniqueName: \"kubernetes.io/projected/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-kube-api-access-2968w\") pod \"dnsmasq-dns-95c8f6689-d4pfh\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.920404 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.939956 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76ddf7d98c-drvjx"] Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.959325 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-ddfdbf76d-mfqhx"] Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.991399 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgd6n\" (UniqueName: \"kubernetes.io/projected/c58c61f5-64cf-4fe3-9792-a9d7b0987188-kube-api-access-sgd6n\") pod \"dnsmasq-dns-76ddf7d98c-drvjx\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.991507 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-config\") pod \"dnsmasq-dns-76ddf7d98c-drvjx\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.991526 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-dns-swift-storage-0\") pod \"dnsmasq-dns-76ddf7d98c-drvjx\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.991546 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-dns-svc\") pod \"dnsmasq-dns-76ddf7d98c-drvjx\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.991638 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-ovsdbserver-sb\") pod \"dnsmasq-dns-76ddf7d98c-drvjx\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" Jan 30 08:29:08 crc kubenswrapper[4870]: I0130 08:29:08.991669 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-ovsdbserver-nb\") pod \"dnsmasq-dns-76ddf7d98c-drvjx\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.024260 4870 generic.go:334] "Generic (PLEG): container finished" 
podID="edd09a42-14b6-4161-ba2a-82c4cf4f5983" containerID="85f1049088e388e69d6da33f4eab9143943bc4d4ba2179d9093657152d474310" exitCode=0 Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.024493 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tssp8" event={"ID":"edd09a42-14b6-4161-ba2a-82c4cf4f5983","Type":"ContainerDied","Data":"85f1049088e388e69d6da33f4eab9143943bc4d4ba2179d9093657152d474310"} Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.025786 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.028366 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.032097 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.041710 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.073057 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.080795 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.092826 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgd6n\" (UniqueName: \"kubernetes.io/projected/c58c61f5-64cf-4fe3-9792-a9d7b0987188-kube-api-access-sgd6n\") pod \"dnsmasq-dns-76ddf7d98c-drvjx\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.093218 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-config\") pod \"dnsmasq-dns-76ddf7d98c-drvjx\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.093241 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-dns-swift-storage-0\") pod \"dnsmasq-dns-76ddf7d98c-drvjx\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.093257 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-dns-svc\") pod \"dnsmasq-dns-76ddf7d98c-drvjx\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.093281 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a4731e-3232-4d27-acf8-9d34ee7570a7-config-data\") pod \"barbican-api-ddfdbf76d-mfqhx\" (UID: \"57a4731e-3232-4d27-acf8-9d34ee7570a7\") " pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.093345 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57a4731e-3232-4d27-acf8-9d34ee7570a7-config-data-custom\") pod \"barbican-api-ddfdbf76d-mfqhx\" (UID: \"57a4731e-3232-4d27-acf8-9d34ee7570a7\") " pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.093371 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57a4731e-3232-4d27-acf8-9d34ee7570a7-logs\") pod \"barbican-api-ddfdbf76d-mfqhx\" (UID: \"57a4731e-3232-4d27-acf8-9d34ee7570a7\") " pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.093414 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-ovsdbserver-sb\") pod \"dnsmasq-dns-76ddf7d98c-drvjx\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.093439 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a4731e-3232-4d27-acf8-9d34ee7570a7-combined-ca-bundle\") pod \"barbican-api-ddfdbf76d-mfqhx\" (UID: \"57a4731e-3232-4d27-acf8-9d34ee7570a7\") " pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.093478 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qdrz\" (UniqueName: \"kubernetes.io/projected/57a4731e-3232-4d27-acf8-9d34ee7570a7-kube-api-access-2qdrz\") pod \"barbican-api-ddfdbf76d-mfqhx\" (UID: \"57a4731e-3232-4d27-acf8-9d34ee7570a7\") " pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.093504 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-ovsdbserver-nb\") pod \"dnsmasq-dns-76ddf7d98c-drvjx\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.094431 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-ovsdbserver-nb\") pod \"dnsmasq-dns-76ddf7d98c-drvjx\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.094746 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-config\") pod \"dnsmasq-dns-76ddf7d98c-drvjx\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.095002 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-ovsdbserver-sb\") pod \"dnsmasq-dns-76ddf7d98c-drvjx\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.095482 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-dns-swift-storage-0\") pod \"dnsmasq-dns-76ddf7d98c-drvjx\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.095601 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-dns-svc\") pod \"dnsmasq-dns-76ddf7d98c-drvjx\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.132923 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgd6n\" (UniqueName: \"kubernetes.io/projected/c58c61f5-64cf-4fe3-9792-a9d7b0987188-kube-api-access-sgd6n\") pod \"dnsmasq-dns-76ddf7d98c-drvjx\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.201374 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-dns-svc\") pod \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.201470 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-dns-swift-storage-0\") pod \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.201561 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-ovsdbserver-nb\") pod \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.201643 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-ovsdbserver-sb\") pod \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.201700 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2968w\" (UniqueName: \"kubernetes.io/projected/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-kube-api-access-2968w\") pod \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.201783 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-config\") pod \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\" (UID: \"a59ee6c0-d68e-4e31-bf9e-1326d91c0633\") " Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.202050 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qdrz\" (UniqueName: \"kubernetes.io/projected/57a4731e-3232-4d27-acf8-9d34ee7570a7-kube-api-access-2qdrz\") pod \"barbican-api-ddfdbf76d-mfqhx\" (UID: \"57a4731e-3232-4d27-acf8-9d34ee7570a7\") " pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 
08:29:09.202090 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.202173 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-scripts\") pod \"cinder-api-0\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.202193 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhtln\" (UniqueName: \"kubernetes.io/projected/2c1333f8-2564-4b5c-84b9-0045d742c45f-kube-api-access-xhtln\") pod \"cinder-api-0\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.202228 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-config-data-custom\") pod \"cinder-api-0\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.202244 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-config-data\") pod \"cinder-api-0\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.202283 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a4731e-3232-4d27-acf8-9d34ee7570a7-config-data\") pod \"barbican-api-ddfdbf76d-mfqhx\" (UID: \"57a4731e-3232-4d27-acf8-9d34ee7570a7\") " pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.202327 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c1333f8-2564-4b5c-84b9-0045d742c45f-logs\") pod \"cinder-api-0\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.202346 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c1333f8-2564-4b5c-84b9-0045d742c45f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.202363 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57a4731e-3232-4d27-acf8-9d34ee7570a7-config-data-custom\") pod \"barbican-api-ddfdbf76d-mfqhx\" (UID: \"57a4731e-3232-4d27-acf8-9d34ee7570a7\") " pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.202383 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/57a4731e-3232-4d27-acf8-9d34ee7570a7-logs\") pod \"barbican-api-ddfdbf76d-mfqhx\" (UID: \"57a4731e-3232-4d27-acf8-9d34ee7570a7\") " pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.202416 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a4731e-3232-4d27-acf8-9d34ee7570a7-combined-ca-bundle\") pod \"barbican-api-ddfdbf76d-mfqhx\" (UID: \"57a4731e-3232-4d27-acf8-9d34ee7570a7\") " pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.203850 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-config" (OuterVolumeSpecName: "config") pod "a59ee6c0-d68e-4e31-bf9e-1326d91c0633" (UID: "a59ee6c0-d68e-4e31-bf9e-1326d91c0633"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.204098 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a59ee6c0-d68e-4e31-bf9e-1326d91c0633" (UID: "a59ee6c0-d68e-4e31-bf9e-1326d91c0633"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.204307 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a59ee6c0-d68e-4e31-bf9e-1326d91c0633" (UID: "a59ee6c0-d68e-4e31-bf9e-1326d91c0633"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.204366 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a59ee6c0-d68e-4e31-bf9e-1326d91c0633" (UID: "a59ee6c0-d68e-4e31-bf9e-1326d91c0633"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.204752 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a59ee6c0-d68e-4e31-bf9e-1326d91c0633" (UID: "a59ee6c0-d68e-4e31-bf9e-1326d91c0633"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.206269 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57a4731e-3232-4d27-acf8-9d34ee7570a7-logs\") pod \"barbican-api-ddfdbf76d-mfqhx\" (UID: \"57a4731e-3232-4d27-acf8-9d34ee7570a7\") " pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.208063 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-kube-api-access-2968w" (OuterVolumeSpecName: "kube-api-access-2968w") pod "a59ee6c0-d68e-4e31-bf9e-1326d91c0633" (UID: "a59ee6c0-d68e-4e31-bf9e-1326d91c0633"). InnerVolumeSpecName "kube-api-access-2968w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.214236 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a4731e-3232-4d27-acf8-9d34ee7570a7-config-data\") pod \"barbican-api-ddfdbf76d-mfqhx\" (UID: \"57a4731e-3232-4d27-acf8-9d34ee7570a7\") " pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.221724 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57a4731e-3232-4d27-acf8-9d34ee7570a7-config-data-custom\") pod \"barbican-api-ddfdbf76d-mfqhx\" (UID: \"57a4731e-3232-4d27-acf8-9d34ee7570a7\") " pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.238539 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a4731e-3232-4d27-acf8-9d34ee7570a7-combined-ca-bundle\") pod \"barbican-api-ddfdbf76d-mfqhx\" (UID: \"57a4731e-3232-4d27-acf8-9d34ee7570a7\") " pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.240407 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qdrz\" (UniqueName: \"kubernetes.io/projected/57a4731e-3232-4d27-acf8-9d34ee7570a7-kube-api-access-2qdrz\") pod \"barbican-api-ddfdbf76d-mfqhx\" (UID: \"57a4731e-3232-4d27-acf8-9d34ee7570a7\") " pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.244741 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.257751 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.259223 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.303654 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-scripts\") pod \"cinder-api-0\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.303696 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhtln\" (UniqueName: \"kubernetes.io/projected/2c1333f8-2564-4b5c-84b9-0045d742c45f-kube-api-access-xhtln\") pod \"cinder-api-0\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.303735 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-config-data-custom\") pod \"cinder-api-0\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.303754 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-config-data\") pod \"cinder-api-0\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.303806 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c1333f8-2564-4b5c-84b9-0045d742c45f-logs\") pod \"cinder-api-0\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.303826 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c1333f8-2564-4b5c-84b9-0045d742c45f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.304605 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c1333f8-2564-4b5c-84b9-0045d742c45f-logs\") pod \"cinder-api-0\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.308265 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-config-data-custom\") pod \"cinder-api-0\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.311567 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c1333f8-2564-4b5c-84b9-0045d742c45f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.312316 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.312535 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2968w\" (UniqueName: \"kubernetes.io/projected/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-kube-api-access-2968w\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.312549 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.312559 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.312568 4870 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.312577 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.312587 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a59ee6c0-d68e-4e31-bf9e-1326d91c0633-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.317376 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.318517 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-config-data\") pod \"cinder-api-0\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.326342 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-scripts\") pod \"cinder-api-0\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.344129 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhtln\" (UniqueName: \"kubernetes.io/projected/2c1333f8-2564-4b5c-84b9-0045d742c45f-kube-api-access-xhtln\") pod \"cinder-api-0\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.397952 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.686353 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9mjj4" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.697742 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5949fbc84f-vdxjp" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.820950 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4171155c-1d8c-48a0-9675-1c730f9130dc-config-data\") pod \"4171155c-1d8c-48a0-9675-1c730f9130dc\" (UID: \"4171155c-1d8c-48a0-9675-1c730f9130dc\") " Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.821035 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/505df376-c8bc-44ce-9c14-8cf94730c550-config\") pod \"505df376-c8bc-44ce-9c14-8cf94730c550\" (UID: \"505df376-c8bc-44ce-9c14-8cf94730c550\") " Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.821365 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4171155c-1d8c-48a0-9675-1c730f9130dc-scripts\") pod \"4171155c-1d8c-48a0-9675-1c730f9130dc\" (UID: \"4171155c-1d8c-48a0-9675-1c730f9130dc\") " Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.821395 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4171155c-1d8c-48a0-9675-1c730f9130dc-horizon-secret-key\") pod \"4171155c-1d8c-48a0-9675-1c730f9130dc\" (UID: \"4171155c-1d8c-48a0-9675-1c730f9130dc\") " Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.821426 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rwsc\" (UniqueName: \"kubernetes.io/projected/505df376-c8bc-44ce-9c14-8cf94730c550-kube-api-access-7rwsc\") pod \"505df376-c8bc-44ce-9c14-8cf94730c550\" (UID: \"505df376-c8bc-44ce-9c14-8cf94730c550\") " Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.821452 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505df376-c8bc-44ce-9c14-8cf94730c550-combined-ca-bundle\") pod \"505df376-c8bc-44ce-9c14-8cf94730c550\" (UID: \"505df376-c8bc-44ce-9c14-8cf94730c550\") " Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.821518 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj77z\" (UniqueName: \"kubernetes.io/projected/4171155c-1d8c-48a0-9675-1c730f9130dc-kube-api-access-xj77z\") pod \"4171155c-1d8c-48a0-9675-1c730f9130dc\" (UID: \"4171155c-1d8c-48a0-9675-1c730f9130dc\") " Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.821613 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4171155c-1d8c-48a0-9675-1c730f9130dc-logs\") pod \"4171155c-1d8c-48a0-9675-1c730f9130dc\" (UID: \"4171155c-1d8c-48a0-9675-1c730f9130dc\") " Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.822407 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4171155c-1d8c-48a0-9675-1c730f9130dc-logs" (OuterVolumeSpecName: "logs") pod "4171155c-1d8c-48a0-9675-1c730f9130dc" (UID: "4171155c-1d8c-48a0-9675-1c730f9130dc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.830944 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4171155c-1d8c-48a0-9675-1c730f9130dc-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4171155c-1d8c-48a0-9675-1c730f9130dc" (UID: "4171155c-1d8c-48a0-9675-1c730f9130dc"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.837413 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4171155c-1d8c-48a0-9675-1c730f9130dc-kube-api-access-xj77z" (OuterVolumeSpecName: "kube-api-access-xj77z") pod "4171155c-1d8c-48a0-9675-1c730f9130dc" (UID: "4171155c-1d8c-48a0-9675-1c730f9130dc"). InnerVolumeSpecName "kube-api-access-xj77z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.834309 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/505df376-c8bc-44ce-9c14-8cf94730c550-kube-api-access-7rwsc" (OuterVolumeSpecName: "kube-api-access-7rwsc") pod "505df376-c8bc-44ce-9c14-8cf94730c550" (UID: "505df376-c8bc-44ce-9c14-8cf94730c550"). InnerVolumeSpecName "kube-api-access-7rwsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.849215 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4171155c-1d8c-48a0-9675-1c730f9130dc-config-data" (OuterVolumeSpecName: "config-data") pod "4171155c-1d8c-48a0-9675-1c730f9130dc" (UID: "4171155c-1d8c-48a0-9675-1c730f9130dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.856836 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505df376-c8bc-44ce-9c14-8cf94730c550-config" (OuterVolumeSpecName: "config") pod "505df376-c8bc-44ce-9c14-8cf94730c550" (UID: "505df376-c8bc-44ce-9c14-8cf94730c550"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.861829 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505df376-c8bc-44ce-9c14-8cf94730c550-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "505df376-c8bc-44ce-9c14-8cf94730c550" (UID: "505df376-c8bc-44ce-9c14-8cf94730c550"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.869812 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4171155c-1d8c-48a0-9675-1c730f9130dc-scripts" (OuterVolumeSpecName: "scripts") pod "4171155c-1d8c-48a0-9675-1c730f9130dc" (UID: "4171155c-1d8c-48a0-9675-1c730f9130dc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.923170 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/505df376-c8bc-44ce-9c14-8cf94730c550-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.923197 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4171155c-1d8c-48a0-9675-1c730f9130dc-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.923210 4870 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4171155c-1d8c-48a0-9675-1c730f9130dc-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.923219 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rwsc\" (UniqueName: \"kubernetes.io/projected/505df376-c8bc-44ce-9c14-8cf94730c550-kube-api-access-7rwsc\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.923228 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505df376-c8bc-44ce-9c14-8cf94730c550-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.923394 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj77z\" (UniqueName: \"kubernetes.io/projected/4171155c-1d8c-48a0-9675-1c730f9130dc-kube-api-access-xj77z\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.923402 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4171155c-1d8c-48a0-9675-1c730f9130dc-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:09 crc kubenswrapper[4870]: I0130 08:29:09.923410 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4171155c-1d8c-48a0-9675-1c730f9130dc-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.124124 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9mjj4" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.137234 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95c8f6689-d4pfh" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.137263 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5949fbc84f-vdxjp" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.278309 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6b94ff658f-bmntr"] Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.278641 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76ddf7d98c-drvjx"] Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.278656 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-656959885f-8m9f8"] Jan 30 08:29:10 crc kubenswrapper[4870]: E0130 08:29:10.278962 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505df376-c8bc-44ce-9c14-8cf94730c550" containerName="neutron-db-sync" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.278976 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="505df376-c8bc-44ce-9c14-8cf94730c550" containerName="neutron-db-sync" Jan 30 08:29:10 crc kubenswrapper[4870]: E0130 08:29:10.279003 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4171155c-1d8c-48a0-9675-1c730f9130dc" containerName="horizon-log" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.279009 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="4171155c-1d8c-48a0-9675-1c730f9130dc" containerName="horizon-log" Jan 30 08:29:10 crc kubenswrapper[4870]: E0130 08:29:10.279023 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4171155c-1d8c-48a0-9675-1c730f9130dc" containerName="horizon" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.279031 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="4171155c-1d8c-48a0-9675-1c730f9130dc" containerName="horizon" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.279180 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="505df376-c8bc-44ce-9c14-8cf94730c550" containerName="neutron-db-sync" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.279205 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="4171155c-1d8c-48a0-9675-1c730f9130dc" containerName="horizon" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.279214 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="4171155c-1d8c-48a0-9675-1c730f9130dc" containerName="horizon-log" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.280532 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9mjj4" event={"ID":"505df376-c8bc-44ce-9c14-8cf94730c550","Type":"ContainerDied","Data":"171c5c07c9fb91c243425bc5e80be08d88ca8fe65555c57f93bf344f77f94faf"} Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.280566 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="171c5c07c9fb91c243425bc5e80be08d88ca8fe65555c57f93bf344f77f94faf" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.280576 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5949fbc84f-vdxjp" event={"ID":"4171155c-1d8c-48a0-9675-1c730f9130dc","Type":"ContainerDied","Data":"3cd56b7866047c6a3192e1f27cec1f489317b2b2d8b6d0b806475464e27ca26f"} Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.280597 4870 scope.go:117] "RemoveContainer" containerID="8001ca067558561639146186888f3fefa9a3f66b8cfe6da27c20754262532feb" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.280750 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-656959885f-8m9f8" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.289824 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-656959885f-8m9f8"] Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.433756 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-dns-svc\") pod \"dnsmasq-dns-656959885f-8m9f8\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " pod="openstack/dnsmasq-dns-656959885f-8m9f8" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.433793 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-config\") pod \"dnsmasq-dns-656959885f-8m9f8\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " pod="openstack/dnsmasq-dns-656959885f-8m9f8" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.433847 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-ovsdbserver-sb\") pod \"dnsmasq-dns-656959885f-8m9f8\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " pod="openstack/dnsmasq-dns-656959885f-8m9f8" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.433868 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-ovsdbserver-nb\") pod \"dnsmasq-dns-656959885f-8m9f8\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " pod="openstack/dnsmasq-dns-656959885f-8m9f8" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.433964 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47htr\" (UniqueName: \"kubernetes.io/projected/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-kube-api-access-47htr\") pod \"dnsmasq-dns-656959885f-8m9f8\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " pod="openstack/dnsmasq-dns-656959885f-8m9f8" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.434010 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-dns-swift-storage-0\") pod \"dnsmasq-dns-656959885f-8m9f8\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " pod="openstack/dnsmasq-dns-656959885f-8m9f8" Jan 30 08:29:10 crc kubenswrapper[4870]: E0130 08:29:10.532373 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.535189 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-ovsdbserver-sb\") pod \"dnsmasq-dns-656959885f-8m9f8\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " pod="openstack/dnsmasq-dns-656959885f-8m9f8" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.535223 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-ovsdbserver-nb\") pod \"dnsmasq-dns-656959885f-8m9f8\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " pod="openstack/dnsmasq-dns-656959885f-8m9f8" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.535390 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47htr\" (UniqueName: \"kubernetes.io/projected/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-kube-api-access-47htr\") pod \"dnsmasq-dns-656959885f-8m9f8\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " pod="openstack/dnsmasq-dns-656959885f-8m9f8" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.535444 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-dns-swift-storage-0\") pod \"dnsmasq-dns-656959885f-8m9f8\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " pod="openstack/dnsmasq-dns-656959885f-8m9f8" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.535475 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-dns-svc\") pod \"dnsmasq-dns-656959885f-8m9f8\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " pod="openstack/dnsmasq-dns-656959885f-8m9f8" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.535490 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-config\") pod \"dnsmasq-dns-656959885f-8m9f8\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " pod="openstack/dnsmasq-dns-656959885f-8m9f8" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.536384 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-dns-swift-storage-0\") pod \"dnsmasq-dns-656959885f-8m9f8\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " pod="openstack/dnsmasq-dns-656959885f-8m9f8" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.536572 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-ovsdbserver-sb\") pod \"dnsmasq-dns-656959885f-8m9f8\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " pod="openstack/dnsmasq-dns-656959885f-8m9f8" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.536992 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-ovsdbserver-nb\") pod \"dnsmasq-dns-656959885f-8m9f8\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " pod="openstack/dnsmasq-dns-656959885f-8m9f8" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.538961 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-config\") pod \"dnsmasq-dns-656959885f-8m9f8\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " pod="openstack/dnsmasq-dns-656959885f-8m9f8" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.539572 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-dns-svc\") pod \"dnsmasq-dns-656959885f-8m9f8\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " pod="openstack/dnsmasq-dns-656959885f-8m9f8" Jan 30 08:29:10 crc kubenswrapper[4870]: E0130 08:29:10.545360 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 30 08:29:10 crc kubenswrapper[4870]: E0130 08:29:10.555641 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 30 08:29:10 crc kubenswrapper[4870]: E0130 08:29:10.556028 4870 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="d501bb9c-d88d-4362-a48e-4d0347ecc90e" containerName="watcher-applier" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.570706 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47htr\" (UniqueName: \"kubernetes.io/projected/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-kube-api-access-47htr\") pod \"dnsmasq-dns-656959885f-8m9f8\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " pod="openstack/dnsmasq-dns-656959885f-8m9f8" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.591288 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f966fd88d-sdpcn"] Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.593091 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f966fd88d-sdpcn" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.596442 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.596701 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.596719 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.596760 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nnfmm" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.606665 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95c8f6689-d4pfh"] Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.621388 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95c8f6689-d4pfh"] Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.630007 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f966fd88d-sdpcn"] Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.642966 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5949fbc84f-vdxjp"] Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.652276 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5949fbc84f-vdxjp"] Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.698462 4870 scope.go:117] "RemoveContainer" containerID="57f324a7af1ed982422d86d531cc1073fc7e06667530fd2daf47081062016e35" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.743691 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-ovndb-tls-certs\") pod \"neutron-f966fd88d-sdpcn\" (UID: \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\") " pod="openstack/neutron-f966fd88d-sdpcn" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.743742 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8xs7\" (UniqueName: \"kubernetes.io/projected/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-kube-api-access-z8xs7\") pod \"neutron-f966fd88d-sdpcn\" (UID: \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\") " pod="openstack/neutron-f966fd88d-sdpcn" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.743787 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-combined-ca-bundle\") pod \"neutron-f966fd88d-sdpcn\" (UID: \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\") " pod="openstack/neutron-f966fd88d-sdpcn" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.744068 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-httpd-config\") pod \"neutron-f966fd88d-sdpcn\" (UID: \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\") " pod="openstack/neutron-f966fd88d-sdpcn" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.744153 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-config\") pod \"neutron-f966fd88d-sdpcn\" (UID: \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\") " pod="openstack/neutron-f966fd88d-sdpcn" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.816974 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-656959885f-8m9f8" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.845873 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-httpd-config\") pod \"neutron-f966fd88d-sdpcn\" (UID: \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\") " pod="openstack/neutron-f966fd88d-sdpcn" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.845942 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-config\") pod \"neutron-f966fd88d-sdpcn\" (UID: \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\") " pod="openstack/neutron-f966fd88d-sdpcn" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.846009 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-ovndb-tls-certs\") pod \"neutron-f966fd88d-sdpcn\" (UID: \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\") " pod="openstack/neutron-f966fd88d-sdpcn" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.846053 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8xs7\" (UniqueName: \"kubernetes.io/projected/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-kube-api-access-z8xs7\") pod \"neutron-f966fd88d-sdpcn\" (UID: \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\") " pod="openstack/neutron-f966fd88d-sdpcn" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.846103 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-combined-ca-bundle\") pod \"neutron-f966fd88d-sdpcn\" (UID: \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\") " pod="openstack/neutron-f966fd88d-sdpcn" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.859508 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.862101 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-combined-ca-bundle\") pod \"neutron-f966fd88d-sdpcn\" (UID: \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\") " pod="openstack/neutron-f966fd88d-sdpcn" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.862574 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-ovndb-tls-certs\") pod \"neutron-f966fd88d-sdpcn\" (UID: \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\") " pod="openstack/neutron-f966fd88d-sdpcn" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.865219 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-config\") pod \"neutron-f966fd88d-sdpcn\" (UID: \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\") " pod="openstack/neutron-f966fd88d-sdpcn" Jan 30 08:29:10 crc kubenswrapper[4870]: 
I0130 08:29:10.869147 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-httpd-config\") pod \"neutron-f966fd88d-sdpcn\" (UID: \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\") " pod="openstack/neutron-f966fd88d-sdpcn" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.887774 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.907652 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.917215 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8xs7\" (UniqueName: \"kubernetes.io/projected/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-kube-api-access-z8xs7\") pod \"neutron-f966fd88d-sdpcn\" (UID: \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\") " pod="openstack/neutron-f966fd88d-sdpcn" Jan 30 08:29:10 crc kubenswrapper[4870]: I0130 08:29:10.934954 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f966fd88d-sdpcn" Jan 30 08:29:11 crc kubenswrapper[4870]: I0130 08:29:11.171963 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b94ff658f-bmntr" event={"ID":"a3bc44ff-bc04-4e44-bb13-ff62f43057f5","Type":"ContainerStarted","Data":"d8b67a8dbdaa2aa5ce60c6293caf9259100b637b6095951bf5eb97842cd01f10"} Jan 30 08:29:11 crc kubenswrapper[4870]: I0130 08:29:11.174195 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8628af25-d5e4-46a0-adec-4c25ca39676b","Type":"ContainerStarted","Data":"1368b589787d7b14188bdd5cbf6d5c41177fec471c90618303e481623b136b42"} Jan 30 08:29:11 crc kubenswrapper[4870]: I0130 08:29:11.175508 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2c1333f8-2564-4b5c-84b9-0045d742c45f","Type":"ContainerStarted","Data":"5eca3df12794c5b43fdb77c898c9bd28c39f3103bd50eb3571fc088c025d0cf9"} Jan 30 08:29:11 crc kubenswrapper[4870]: I0130 08:29:11.179149 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f839b4e9-f9f0-489d-b04b-14b03ab6895b","Type":"ContainerStarted","Data":"104aebc0ebfaddceac864d04e025d33e5cf5c1cacba0ac14afe67f72e174ead6"} Jan 30 08:29:11 crc kubenswrapper[4870]: E0130 08:29:11.189631 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:bd27ff135622ee80d6d6693f9c0bf8e444ef41832cda5564c2025ca13b50eaf0: Get \\\"http://38.102.83.23:5001/v2/podified-master-centos10/openstack-ceilometer-central/blobs/sha256:bd27ff135622ee80d6d6693f9c0bf8e444ef41832cda5564c2025ca13b50eaf0\\\": context canceled\"" pod="openstack/ceilometer-0" podUID="3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" Jan 30 08:29:11 crc kubenswrapper[4870]: I0130 08:29:11.336344 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tssp8" Jan 30 08:29:11 crc kubenswrapper[4870]: I0130 08:29:11.352973 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-ddfdbf76d-mfqhx"] Jan 30 08:29:11 crc kubenswrapper[4870]: I0130 08:29:11.407950 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76ddf7d98c-drvjx"] Jan 30 08:29:11 crc kubenswrapper[4870]: I0130 08:29:11.424075 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-54fb8bddb6-w78xn"] Jan 30 08:29:11 crc kubenswrapper[4870]: I0130 08:29:11.479560 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/edd09a42-14b6-4161-ba2a-82c4cf4f5983-db-sync-config-data\") pod \"edd09a42-14b6-4161-ba2a-82c4cf4f5983\" (UID: \"edd09a42-14b6-4161-ba2a-82c4cf4f5983\") " Jan 30 08:29:11 crc kubenswrapper[4870]: I0130 08:29:11.479730 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd09a42-14b6-4161-ba2a-82c4cf4f5983-combined-ca-bundle\") pod \"edd09a42-14b6-4161-ba2a-82c4cf4f5983\" (UID: \"edd09a42-14b6-4161-ba2a-82c4cf4f5983\") " Jan 30 08:29:11 crc kubenswrapper[4870]: I0130 08:29:11.479785 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd09a42-14b6-4161-ba2a-82c4cf4f5983-config-data\") pod \"edd09a42-14b6-4161-ba2a-82c4cf4f5983\" (UID: \"edd09a42-14b6-4161-ba2a-82c4cf4f5983\") " Jan 30 08:29:11 crc kubenswrapper[4870]: I0130 08:29:11.479821 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n8n8\" (UniqueName: \"kubernetes.io/projected/edd09a42-14b6-4161-ba2a-82c4cf4f5983-kube-api-access-7n8n8\") pod \"edd09a42-14b6-4161-ba2a-82c4cf4f5983\" (UID: \"edd09a42-14b6-4161-ba2a-82c4cf4f5983\") " Jan 30 08:29:11 crc kubenswrapper[4870]: I0130 08:29:11.551735 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edd09a42-14b6-4161-ba2a-82c4cf4f5983-kube-api-access-7n8n8" (OuterVolumeSpecName: "kube-api-access-7n8n8") pod "edd09a42-14b6-4161-ba2a-82c4cf4f5983" (UID: "edd09a42-14b6-4161-ba2a-82c4cf4f5983"). InnerVolumeSpecName "kube-api-access-7n8n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:29:11 crc kubenswrapper[4870]: I0130 08:29:11.584332 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n8n8\" (UniqueName: \"kubernetes.io/projected/edd09a42-14b6-4161-ba2a-82c4cf4f5983-kube-api-access-7n8n8\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:11 crc kubenswrapper[4870]: I0130 08:29:11.595130 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd09a42-14b6-4161-ba2a-82c4cf4f5983-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edd09a42-14b6-4161-ba2a-82c4cf4f5983" (UID: "edd09a42-14b6-4161-ba2a-82c4cf4f5983"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:11 crc kubenswrapper[4870]: I0130 08:29:11.604056 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd09a42-14b6-4161-ba2a-82c4cf4f5983-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "edd09a42-14b6-4161-ba2a-82c4cf4f5983" (UID: "edd09a42-14b6-4161-ba2a-82c4cf4f5983"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:11 crc kubenswrapper[4870]: I0130 08:29:11.615513 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-656959885f-8m9f8"] Jan 30 08:29:11 crc kubenswrapper[4870]: I0130 08:29:11.650935 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd09a42-14b6-4161-ba2a-82c4cf4f5983-config-data" (OuterVolumeSpecName: "config-data") pod "edd09a42-14b6-4161-ba2a-82c4cf4f5983" (UID: "edd09a42-14b6-4161-ba2a-82c4cf4f5983"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:11 crc kubenswrapper[4870]: I0130 08:29:11.688109 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd09a42-14b6-4161-ba2a-82c4cf4f5983-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:11 crc kubenswrapper[4870]: I0130 08:29:11.688146 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd09a42-14b6-4161-ba2a-82c4cf4f5983-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:11 crc kubenswrapper[4870]: I0130 08:29:11.688175 4870 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/edd09a42-14b6-4161-ba2a-82c4cf4f5983-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:11 crc kubenswrapper[4870]: I0130 08:29:11.975740 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f966fd88d-sdpcn"] Jan 30 08:29:12 crc kubenswrapper[4870]: I0130 08:29:12.128347 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4171155c-1d8c-48a0-9675-1c730f9130dc" path="/var/lib/kubelet/pods/4171155c-1d8c-48a0-9675-1c730f9130dc/volumes" Jan 30 08:29:12 crc kubenswrapper[4870]: I0130 08:29:12.129086 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a59ee6c0-d68e-4e31-bf9e-1326d91c0633" path="/var/lib/kubelet/pods/a59ee6c0-d68e-4e31-bf9e-1326d91c0633/volumes" Jan 30 08:29:12 crc kubenswrapper[4870]: I0130 08:29:12.217722 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54fb8bddb6-w78xn" event={"ID":"8a32795f-6328-4d51-a69a-60be965b17f0","Type":"ContainerStarted","Data":"8fc75aa3cc115bc611e81fa24f5880ce7c89467cc1852eadde5f662ffbd94434"} Jan 30 08:29:12 crc kubenswrapper[4870]: I0130 08:29:12.221059 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-ddfdbf76d-mfqhx" event={"ID":"57a4731e-3232-4d27-acf8-9d34ee7570a7","Type":"ContainerStarted","Data":"4bfee3b8db156e8f632e1b810fed5fff1f01f89361fe371e88524396b0b3e740"} Jan 30 08:29:12 crc kubenswrapper[4870]: I0130 08:29:12.221093 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-ddfdbf76d-mfqhx" event={"ID":"57a4731e-3232-4d27-acf8-9d34ee7570a7","Type":"ContainerStarted","Data":"4fefb5421067779e4ffb7448501feacbbd8e1262345c29ebcf35ade1e4bf9f85"} Jan 30 08:29:12 crc kubenswrapper[4870]: I0130 08:29:12.229127 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2","Type":"ContainerStarted","Data":"c24e43cc0a18413bd641a93cefd91035b18efb88407c422b061290231df7fca8"} Jan 30 08:29:12 crc kubenswrapper[4870]: I0130 08:29:12.229260 4870 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" containerName="ceilometer-notification-agent" containerID="cri-o://b45a58a1e3e4865b397313616e6494da5d8e1887dd9401a657303e526b984274" gracePeriod=30 Jan 30 08:29:12 crc kubenswrapper[4870]: I0130 08:29:12.229338 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 08:29:12 crc kubenswrapper[4870]: I0130 08:29:12.230225 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" containerName="proxy-httpd" containerID="cri-o://c24e43cc0a18413bd641a93cefd91035b18efb88407c422b061290231df7fca8" gracePeriod=30 Jan 30 08:29:12 crc kubenswrapper[4870]: I0130 08:29:12.230661 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" containerName="sg-core" containerID="cri-o://eb3296110237841074b9a042e7cdf380a70fa9819f62258f821b5124b40eb835" gracePeriod=30 Jan 30 08:29:12 crc kubenswrapper[4870]: I0130 08:29:12.234114 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f966fd88d-sdpcn" event={"ID":"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88","Type":"ContainerStarted","Data":"cfedb81ba7d9e195fe41ff8a768c117183039fb6da240c26ec467012add1460c"} Jan 30 08:29:12 crc kubenswrapper[4870]: I0130 08:29:12.237993 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-656959885f-8m9f8" event={"ID":"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa","Type":"ContainerStarted","Data":"27ed939972c481c8d851e169e12137d6f31939cc5f5f7a13dd1d93784b1b442d"} Jan 30 08:29:12 crc kubenswrapper[4870]: I0130 08:29:12.247549 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tssp8" event={"ID":"edd09a42-14b6-4161-ba2a-82c4cf4f5983","Type":"ContainerDied","Data":"8bfad17f6d235c11635a3d5c597e4e8cad4341b4b72906e827a01ca540cffaac"} Jan 30 08:29:12 crc kubenswrapper[4870]: I0130 08:29:12.247586 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bfad17f6d235c11635a3d5c597e4e8cad4341b4b72906e827a01ca540cffaac" Jan 30 08:29:12 crc kubenswrapper[4870]: I0130 08:29:12.247646 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tssp8" Jan 30 08:29:12 crc kubenswrapper[4870]: I0130 08:29:12.255579 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" event={"ID":"c58c61f5-64cf-4fe3-9792-a9d7b0987188","Type":"ContainerStarted","Data":"0d8b1a72f03d9ad69f8ac9e36fd394f650e5d7b9d43ccffdaff2ad1160e3aeef"} Jan 30 08:29:12 crc kubenswrapper[4870]: I0130 08:29:12.808255 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-656959885f-8m9f8"] Jan 30 08:29:12 crc kubenswrapper[4870]: I0130 08:29:12.851970 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d675956bc-zzkss"] Jan 30 08:29:12 crc kubenswrapper[4870]: E0130 08:29:12.856101 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd09a42-14b6-4161-ba2a-82c4cf4f5983" containerName="glance-db-sync" Jan 30 08:29:12 crc kubenswrapper[4870]: I0130 08:29:12.856131 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd09a42-14b6-4161-ba2a-82c4cf4f5983" containerName="glance-db-sync" Jan 30 08:29:12 crc kubenswrapper[4870]: I0130 08:29:12.867958 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="edd09a42-14b6-4161-ba2a-82c4cf4f5983" containerName="glance-db-sync" Jan 30 08:29:12 crc kubenswrapper[4870]: I0130 08:29:12.870877 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:29:12 crc kubenswrapper[4870]: I0130 08:29:12.889431 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d675956bc-zzkss"] Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.051376 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-dns-swift-storage-0\") pod \"dnsmasq-dns-5d675956bc-zzkss\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.052586 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-ovsdbserver-sb\") pod \"dnsmasq-dns-5d675956bc-zzkss\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.052712 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-ovsdbserver-nb\") pod \"dnsmasq-dns-5d675956bc-zzkss\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.052781 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xpwr\" (UniqueName: \"kubernetes.io/projected/3688605b-306e-4093-93d5-b96cae2a80de-kube-api-access-6xpwr\") pod \"dnsmasq-dns-5d675956bc-zzkss\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.052904 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-config\") pod \"dnsmasq-dns-5d675956bc-zzkss\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.052987 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-dns-svc\") pod \"dnsmasq-dns-5d675956bc-zzkss\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.154381 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-config\") pod \"dnsmasq-dns-5d675956bc-zzkss\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.154428 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-dns-svc\") pod \"dnsmasq-dns-5d675956bc-zzkss\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.154515 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-dns-swift-storage-0\") pod \"dnsmasq-dns-5d675956bc-zzkss\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.154552 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-ovsdbserver-sb\") pod \"dnsmasq-dns-5d675956bc-zzkss\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.154591 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-ovsdbserver-nb\") pod \"dnsmasq-dns-5d675956bc-zzkss\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.154605 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xpwr\" (UniqueName: \"kubernetes.io/projected/3688605b-306e-4093-93d5-b96cae2a80de-kube-api-access-6xpwr\") pod \"dnsmasq-dns-5d675956bc-zzkss\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.156373 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-config\") pod \"dnsmasq-dns-5d675956bc-zzkss\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.156392 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-dns-svc\") pod 
\"dnsmasq-dns-5d675956bc-zzkss\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.157189 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-ovsdbserver-sb\") pod \"dnsmasq-dns-5d675956bc-zzkss\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.157916 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-ovsdbserver-nb\") pod \"dnsmasq-dns-5d675956bc-zzkss\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.158614 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-dns-swift-storage-0\") pod \"dnsmasq-dns-5d675956bc-zzkss\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.176750 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xpwr\" (UniqueName: \"kubernetes.io/projected/3688605b-306e-4093-93d5-b96cae2a80de-kube-api-access-6xpwr\") pod \"dnsmasq-dns-5d675956bc-zzkss\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.215866 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.295299 4870 generic.go:334] "Generic (PLEG): container finished" podID="9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa" containerID="34ec090b045bffd5d1230bb03e6f1ad024b19b6f7c9f069106bed74b50df542e" exitCode=0 Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.295373 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-656959885f-8m9f8" event={"ID":"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa","Type":"ContainerDied","Data":"34ec090b045bffd5d1230bb03e6f1ad024b19b6f7c9f069106bed74b50df542e"} Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.311114 4870 generic.go:334] "Generic (PLEG): container finished" podID="c58c61f5-64cf-4fe3-9792-a9d7b0987188" containerID="716b6800404b937b1b37ac9d66dd42946bf82bd2065094be653871d4c6645e5a" exitCode=0 Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.311192 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" event={"ID":"c58c61f5-64cf-4fe3-9792-a9d7b0987188","Type":"ContainerDied","Data":"716b6800404b937b1b37ac9d66dd42946bf82bd2065094be653871d4c6645e5a"} Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.339448 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2c1333f8-2564-4b5c-84b9-0045d742c45f","Type":"ContainerStarted","Data":"135b22419515f0c37fa07d5cae62ea43515dc6460498408338173f7df4e2361b"} Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.384315 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f839b4e9-f9f0-489d-b04b-14b03ab6895b","Type":"ContainerStarted","Data":"43e89a562f59d4e0caa713b4d3d7d10459a54c334d8bf93738ef4bfb17bc36b1"} Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.442175 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-ddfdbf76d-mfqhx" event={"ID":"57a4731e-3232-4d27-acf8-9d34ee7570a7","Type":"ContainerStarted","Data":"9a536688d050dc6432091788f5363412218a7bb425a3e180118973e359516afe"} Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.442260 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.442281 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.464133 4870 generic.go:334] "Generic (PLEG): container finished" podID="3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" containerID="c24e43cc0a18413bd641a93cefd91035b18efb88407c422b061290231df7fca8" exitCode=0 Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.464189 4870 generic.go:334] "Generic (PLEG): container finished" podID="3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" containerID="eb3296110237841074b9a042e7cdf380a70fa9819f62258f821b5124b40eb835" exitCode=2 Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.464164 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2","Type":"ContainerDied","Data":"c24e43cc0a18413bd641a93cefd91035b18efb88407c422b061290231df7fca8"} Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.464364 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2","Type":"ContainerDied","Data":"eb3296110237841074b9a042e7cdf380a70fa9819f62258f821b5124b40eb835"} Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.486254 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f966fd88d-sdpcn" event={"ID":"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88","Type":"ContainerStarted","Data":"8e106b0c6b2ed513250f13c043895b69dbe1cd77d36b5ecd4e47e2f2226b112e"} Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.486298 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f966fd88d-sdpcn" event={"ID":"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88","Type":"ContainerStarted","Data":"b1d3aae9bf64c7d5adb6c7a0c0cef4cbd05ceee79bf2bdc26b1676e0ef8ac7ff"} Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.487412 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-f966fd88d-sdpcn" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.491637 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-ddfdbf76d-mfqhx" podStartSLOduration=5.491618924 podStartE2EDuration="5.491618924s" podCreationTimestamp="2026-01-30 08:29:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:29:13.475295252 +0000 UTC m=+1192.170842361" watchObservedRunningTime="2026-01-30 08:29:13.491618924 +0000 UTC m=+1192.187166033" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.524906 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-f966fd88d-sdpcn" podStartSLOduration=3.52488899 podStartE2EDuration="3.52488899s" podCreationTimestamp="2026-01-30 08:29:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:29:13.518287393 +0000 UTC m=+1192.213834502" watchObservedRunningTime="2026-01-30 08:29:13.52488899 +0000 UTC m=+1192.220436089" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.849590 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.855446 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.859434 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-58ht6" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.860450 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.860975 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.889018 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.978991 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.980791 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.982569 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.982628 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.982658 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-logs\") pod \"glance-default-external-api-0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.982769 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.982831 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfc5p\" (UniqueName: \"kubernetes.io/projected/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-kube-api-access-qfc5p\") pod \"glance-default-external-api-0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.982864 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-scripts\") pod \"glance-default-external-api-0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.982907 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-config-data\") pod \"glance-default-external-api-0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.983996 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 08:29:13 crc kubenswrapper[4870]: I0130 08:29:13.993966 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.085511 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.085551 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.085575 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.085601 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.085627 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.085655 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-logs\") pod \"glance-default-external-api-0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.085678 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-logs\") pod \"glance-default-internal-api-0\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.085698 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.085720 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx4qh\" (UniqueName: \"kubernetes.io/projected/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-kube-api-access-tx4qh\") pod \"glance-default-internal-api-0\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.085765 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.085815 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfc5p\" (UniqueName: \"kubernetes.io/projected/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-kube-api-access-qfc5p\") pod \"glance-default-external-api-0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.085847 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-scripts\") pod \"glance-default-external-api-0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.085877 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-config-data\") pod \"glance-default-external-api-0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.085914 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.088043 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.099139 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.102663 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d675956bc-zzkss"] Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.102906 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-config-data\") pod \"glance-default-external-api-0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.103132 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-scripts\") pod \"glance-default-external-api-0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.135443 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfc5p\" (UniqueName: 
\"kubernetes.io/projected/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-kube-api-access-qfc5p\") pod \"glance-default-external-api-0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.149777 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.149915 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-logs\") pod \"glance-default-external-api-0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.157042 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.187265 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-logs\") pod \"glance-default-internal-api-0\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.187313 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.187335 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx4qh\" (UniqueName: \"kubernetes.io/projected/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-kube-api-access-tx4qh\") pod \"glance-default-internal-api-0\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.187537 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.187574 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.187606 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " 
pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.187637 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.190874 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.191440 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-logs\") pod \"glance-default-internal-api-0\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.192806 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.193073 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.193435 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.195614 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.196908 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.219003 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx4qh\" (UniqueName: \"kubernetes.io/projected/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-kube-api-access-tx4qh\") pod \"glance-default-internal-api-0\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.241167 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.309984 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.498034 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2c1333f8-2564-4b5c-84b9-0045d742c45f","Type":"ContainerStarted","Data":"15d4837d5c345debdddee40f4775635705eb028fe7633d8e6d5c855f92746c7a"} Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.498143 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2c1333f8-2564-4b5c-84b9-0045d742c45f" containerName="cinder-api-log" containerID="cri-o://135b22419515f0c37fa07d5cae62ea43515dc6460498408338173f7df4e2361b" gracePeriod=30 Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.498217 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2c1333f8-2564-4b5c-84b9-0045d742c45f" containerName="cinder-api" containerID="cri-o://15d4837d5c345debdddee40f4775635705eb028fe7633d8e6d5c855f92746c7a" gracePeriod=30 Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.524514 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.524497007 podStartE2EDuration="6.524497007s" podCreationTimestamp="2026-01-30 08:29:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:29:14.522801983 +0000 UTC m=+1193.218349082" watchObservedRunningTime="2026-01-30 08:29:14.524497007 +0000 UTC m=+1193.220044116" Jan 30 08:29:14 crc kubenswrapper[4870]: W0130 08:29:14.656192 4870 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3688605b_306e_4093_93d5_b96cae2a80de.slice/crio-38880a59bd8e8ad87943840cbaa98251faf8d234264c0ca4ae49cc1e495e8ef5 WatchSource:0}: Error finding container 38880a59bd8e8ad87943840cbaa98251faf8d234264c0ca4ae49cc1e495e8ef5: Status 404 returned error can't find the container with id 38880a59bd8e8ad87943840cbaa98251faf8d234264c0ca4ae49cc1e495e8ef5 Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.714198 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74569d8966-5sjxs" podUID="1872a14d-aeff-46f7-8430-c6fe0eb6973b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.714428 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.774156 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.915493 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-config\") pod \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.915833 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-ovsdbserver-nb\") pod \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.915950 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-dns-swift-storage-0\") pod \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.916117 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-dns-svc\") pod \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.916152 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-ovsdbserver-sb\") pod \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.916225 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgd6n\" (UniqueName: \"kubernetes.io/projected/c58c61f5-64cf-4fe3-9792-a9d7b0987188-kube-api-access-sgd6n\") pod \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\" (UID: \"c58c61f5-64cf-4fe3-9792-a9d7b0987188\") " Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.925060 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c58c61f5-64cf-4fe3-9792-a9d7b0987188-kube-api-access-sgd6n" (OuterVolumeSpecName: 
"kube-api-access-sgd6n") pod "c58c61f5-64cf-4fe3-9792-a9d7b0987188" (UID: "c58c61f5-64cf-4fe3-9792-a9d7b0987188"). InnerVolumeSpecName "kube-api-access-sgd6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.947135 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c58c61f5-64cf-4fe3-9792-a9d7b0987188" (UID: "c58c61f5-64cf-4fe3-9792-a9d7b0987188"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.955433 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c58c61f5-64cf-4fe3-9792-a9d7b0987188" (UID: "c58c61f5-64cf-4fe3-9792-a9d7b0987188"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.958161 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c58c61f5-64cf-4fe3-9792-a9d7b0987188" (UID: "c58c61f5-64cf-4fe3-9792-a9d7b0987188"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.964657 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-config" (OuterVolumeSpecName: "config") pod "c58c61f5-64cf-4fe3-9792-a9d7b0987188" (UID: "c58c61f5-64cf-4fe3-9792-a9d7b0987188"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:14 crc kubenswrapper[4870]: I0130 08:29:14.967691 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c58c61f5-64cf-4fe3-9792-a9d7b0987188" (UID: "c58c61f5-64cf-4fe3-9792-a9d7b0987188"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:15 crc kubenswrapper[4870]: I0130 08:29:15.021061 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgd6n\" (UniqueName: \"kubernetes.io/projected/c58c61f5-64cf-4fe3-9792-a9d7b0987188-kube-api-access-sgd6n\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:15 crc kubenswrapper[4870]: I0130 08:29:15.021107 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:15 crc kubenswrapper[4870]: I0130 08:29:15.021121 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:15 crc kubenswrapper[4870]: I0130 08:29:15.021131 4870 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:15 crc kubenswrapper[4870]: I0130 08:29:15.021140 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:15 crc kubenswrapper[4870]: I0130 08:29:15.021149 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c58c61f5-64cf-4fe3-9792-a9d7b0987188-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:15 crc kubenswrapper[4870]: I0130 08:29:15.538027 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" event={"ID":"c58c61f5-64cf-4fe3-9792-a9d7b0987188","Type":"ContainerDied","Data":"0d8b1a72f03d9ad69f8ac9e36fd394f650e5d7b9d43ccffdaff2ad1160e3aeef"} Jan 30 08:29:15 crc kubenswrapper[4870]: I0130 08:29:15.538085 4870 scope.go:117] "RemoveContainer" containerID="716b6800404b937b1b37ac9d66dd42946bf82bd2065094be653871d4c6645e5a" Jan 30 08:29:15 crc kubenswrapper[4870]: I0130 08:29:15.538223 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76ddf7d98c-drvjx" Jan 30 08:29:15 crc kubenswrapper[4870]: E0130 08:29:15.544545 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 30 08:29:15 crc kubenswrapper[4870]: E0130 08:29:15.548717 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 30 08:29:15 crc kubenswrapper[4870]: E0130 08:29:15.559941 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 30 08:29:15 crc kubenswrapper[4870]: E0130 08:29:15.560018 4870 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="d501bb9c-d88d-4362-a48e-4d0347ecc90e" containerName="watcher-applier" Jan 30 08:29:15 crc kubenswrapper[4870]: I0130 08:29:15.567051 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d675956bc-zzkss" event={"ID":"3688605b-306e-4093-93d5-b96cae2a80de","Type":"ContainerStarted","Data":"38880a59bd8e8ad87943840cbaa98251faf8d234264c0ca4ae49cc1e495e8ef5"} Jan 30 08:29:15 crc kubenswrapper[4870]: I0130 08:29:15.584107 4870 generic.go:334] "Generic (PLEG): container finished" podID="8628af25-d5e4-46a0-adec-4c25ca39676b" containerID="1368b589787d7b14188bdd5cbf6d5c41177fec471c90618303e481623b136b42" exitCode=1 Jan 30 08:29:15 crc kubenswrapper[4870]: I0130 08:29:15.584162 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8628af25-d5e4-46a0-adec-4c25ca39676b","Type":"ContainerDied","Data":"1368b589787d7b14188bdd5cbf6d5c41177fec471c90618303e481623b136b42"} Jan 30 08:29:15 crc kubenswrapper[4870]: I0130 08:29:15.584693 4870 scope.go:117] "RemoveContainer" containerID="1368b589787d7b14188bdd5cbf6d5c41177fec471c90618303e481623b136b42" Jan 30 08:29:15 crc kubenswrapper[4870]: E0130 08:29:15.585513 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(8628af25-d5e4-46a0-adec-4c25ca39676b)\"" pod="openstack/watcher-decision-engine-0" podUID="8628af25-d5e4-46a0-adec-4c25ca39676b" Jan 30 08:29:15 crc kubenswrapper[4870]: I0130 08:29:15.632881 4870 generic.go:334] "Generic (PLEG): container finished" podID="2c1333f8-2564-4b5c-84b9-0045d742c45f" containerID="135b22419515f0c37fa07d5cae62ea43515dc6460498408338173f7df4e2361b" exitCode=143 Jan 30 08:29:15 crc kubenswrapper[4870]: I0130 08:29:15.633113 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"2c1333f8-2564-4b5c-84b9-0045d742c45f","Type":"ContainerDied","Data":"135b22419515f0c37fa07d5cae62ea43515dc6460498408338173f7df4e2361b"} Jan 30 08:29:15 crc kubenswrapper[4870]: I0130 08:29:15.641858 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f839b4e9-f9f0-489d-b04b-14b03ab6895b","Type":"ContainerStarted","Data":"8eb8ea85632818a452e1edbc831618a751f9abde5e34ca94d13504592653fdb8"} Jan 30 08:29:15 crc kubenswrapper[4870]: I0130 08:29:15.656361 4870 generic.go:334] "Generic (PLEG): container finished" podID="3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" containerID="b45a58a1e3e4865b397313616e6494da5d8e1887dd9401a657303e526b984274" exitCode=0 Jan 30 08:29:15 crc kubenswrapper[4870]: I0130 08:29:15.657652 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2","Type":"ContainerDied","Data":"b45a58a1e3e4865b397313616e6494da5d8e1887dd9401a657303e526b984274"} Jan 30 08:29:15 crc kubenswrapper[4870]: I0130 08:29:15.682836 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.776160973 podStartE2EDuration="7.68282084s" podCreationTimestamp="2026-01-30 08:29:08 +0000 UTC" firstStartedPulling="2026-01-30 08:29:10.889835039 +0000 UTC m=+1189.585382148" lastFinishedPulling="2026-01-30 08:29:11.796494896 +0000 UTC m=+1190.492042015" observedRunningTime="2026-01-30 08:29:15.672204997 +0000 UTC m=+1194.367752106" watchObservedRunningTime="2026-01-30 08:29:15.68282084 +0000 UTC m=+1194.378367949" Jan 30 08:29:15 crc kubenswrapper[4870]: I0130 08:29:15.762415 4870 scope.go:117] "RemoveContainer" containerID="6d57bab3d6c90d46f3ba26f44c864a6ec85718286b3ff7835459214c488726bc" Jan 30 08:29:15 crc kubenswrapper[4870]: I0130 08:29:15.821658 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76ddf7d98c-drvjx"] Jan 30 08:29:15 crc kubenswrapper[4870]: I0130 08:29:15.830485 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76ddf7d98c-drvjx"] Jan 30 08:29:15 crc kubenswrapper[4870]: E0130 08:29:15.922135 4870 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 30 08:29:15 crc kubenswrapper[4870]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 30 08:29:15 crc kubenswrapper[4870]: > podSandboxID="27ed939972c481c8d851e169e12137d6f31939cc5f5f7a13dd1d93784b1b442d" Jan 30 08:29:15 crc kubenswrapper[4870]: E0130 08:29:15.922484 4870 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 08:29:15 crc kubenswrapper[4870]: container &Container{Name:dnsmasq-dns,Image:38.102.83.23:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n598h74h59ch8ch64h599hf9hf7h668hdch8ch597h65bh59ch8dh6hc7h86h57fh649h75h586h655h57fh58bh54dh564h5b8h68fh54bh55h56dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-47htr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-656959885f-8m9f8_openstack(9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 30 08:29:15 crc kubenswrapper[4870]: > logger="UnhandledError" Jan 30 08:29:15 crc kubenswrapper[4870]: E0130 08:29:15.924687 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-656959885f-8m9f8" 
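
[Annotation] The CreateContainerError above is a subPath failure: as the error path itself shows, each subPath mount is materialized under /var/lib/kubelet/pods/<uid>/volume-subpaths/<volume>/<container>/<n>, and here that source directory no longer exists — consistent with the pod being torn down while the container start was still being retried (its volumes are unmounted later in this log). The failing mounts, re-expressed from the dumped container spec with the real k8s.io/api/core/v1 types:

```go
package main

import corev1 "k8s.io/api/core/v1"

// VolumeMounts copied from the dumped dnsmasq-dns container spec: the hosts
// directories are mounted read-only from ConfigMap volumes via SubPath, which
// is exactly what the failing volume-subpaths bind mount backs.
var dnsmasqMounts = []corev1.VolumeMount{
	{Name: "config", ReadOnly: true, MountPath: "/etc/dnsmasq.d/config.cfg", SubPath: "dns"},
	{Name: "dns-svc", ReadOnly: true, MountPath: "/etc/dnsmasq.d/hosts/dns-svc", SubPath: "dns-svc"},
	{Name: "dns-swift-storage-0", ReadOnly: true, MountPath: "/etc/dnsmasq.d/hosts/dns-swift-storage-0", SubPath: "dns-swift-storage-0"},
	{Name: "ovsdbserver-nb", ReadOnly: true, MountPath: "/etc/dnsmasq.d/hosts/ovsdbserver-nb", SubPath: "ovsdbserver-nb"},
	{Name: "ovsdbserver-sb", ReadOnly: true, MountPath: "/etc/dnsmasq.d/hosts/ovsdbserver-sb", SubPath: "ovsdbserver-sb"},
}

func main() { _ = dnsmasqMounts }
```
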
podUID="9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa" Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.096176 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c58c61f5-64cf-4fe3-9792-a9d7b0987188" path="/var/lib/kubelet/pods/c58c61f5-64cf-4fe3-9792-a9d7b0987188/volumes" Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.229733 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.355650 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdb9v\" (UniqueName: \"kubernetes.io/projected/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-kube-api-access-gdb9v\") pod \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.355714 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-sg-core-conf-yaml\") pod \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.355753 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-log-httpd\") pod \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.355782 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-run-httpd\") pod \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.355922 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-combined-ca-bundle\") pod \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.356009 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-config-data\") pod \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.356051 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-scripts\") pod \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\" (UID: \"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2\") " Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.360093 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" (UID: "3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.360316 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-scripts" (OuterVolumeSpecName: "scripts") pod "3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" (UID: "3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.360566 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" (UID: "3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.373215 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-kube-api-access-gdb9v" (OuterVolumeSpecName: "kube-api-access-gdb9v") pod "3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" (UID: "3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2"). InnerVolumeSpecName "kube-api-access-gdb9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.434482 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" (UID: "3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.435454 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.460541 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.460571 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdb9v\" (UniqueName: \"kubernetes.io/projected/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-kube-api-access-gdb9v\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.460584 4870 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.460596 4870 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.460606 4870 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.463161 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") 
pod "3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" (UID: "3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.523469 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.524340 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-config-data" (OuterVolumeSpecName: "config-data") pod "3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" (UID: "3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:16 crc kubenswrapper[4870]: W0130 08:29:16.530292 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f83ba22_1075_4159_b19d_f0b9ceec4ac3.slice/crio-8eeee39b66f7aa76b98a5db5ced5f5152684b0c7a60879e1f3f2cf389449b18e WatchSource:0}: Error finding container 8eeee39b66f7aa76b98a5db5ced5f5152684b0c7a60879e1f3f2cf389449b18e: Status 404 returned error can't find the container with id 8eeee39b66f7aa76b98a5db5ced5f5152684b0c7a60879e1f3f2cf389449b18e Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.561703 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.561732 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.691528 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b94ff658f-bmntr" event={"ID":"a3bc44ff-bc04-4e44-bb13-ff62f43057f5","Type":"ContainerStarted","Data":"3298eed18edf06e451a427ebc9742fc8332e80a14168a95a23ab3fad8b1cb025"} Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.698433 4870 generic.go:334] "Generic (PLEG): container finished" podID="3688605b-306e-4093-93d5-b96cae2a80de" containerID="e42b42a5c9ab862c026277e738c25344a1c208eaa58c5cd18419418c9ff99fc8" exitCode=0 Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.698692 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d675956bc-zzkss" event={"ID":"3688605b-306e-4093-93d5-b96cae2a80de","Type":"ContainerDied","Data":"e42b42a5c9ab862c026277e738c25344a1c208eaa58c5cd18419418c9ff99fc8"} Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.734157 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54fb8bddb6-w78xn" event={"ID":"8a32795f-6328-4d51-a69a-60be965b17f0","Type":"ContainerStarted","Data":"bc53f0af1a8dc4650c6534fd0f5b47e13a20ad4020c59618ab83f14b87f5c1fb"} Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.821187 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2","Type":"ContainerDied","Data":"0a2dbcce6be2e5137bdbc1dec4f8f525f5301e8818ebf37e38952868cb263db6"} Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.821417 4870 scope.go:117] "RemoveContainer" containerID="c24e43cc0a18413bd641a93cefd91035b18efb88407c422b061290231df7fca8" Jan 30 
08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.821613 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.826424 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9f83ba22-1075-4159-b19d-f0b9ceec4ac3","Type":"ContainerStarted","Data":"8eeee39b66f7aa76b98a5db5ced5f5152684b0c7a60879e1f3f2cf389449b18e"} Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.827277 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0","Type":"ContainerStarted","Data":"fab1f41c1ff636465358dde7b3c49c985eff8cd0920b9b81dab7f18f354ae31d"} Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.886301 4870 scope.go:117] "RemoveContainer" containerID="eb3296110237841074b9a042e7cdf380a70fa9819f62258f821b5124b40eb835" Jan 30 08:29:16 crc kubenswrapper[4870]: I0130 08:29:16.961846 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.045978 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.075056 4870 scope.go:117] "RemoveContainer" containerID="b45a58a1e3e4865b397313616e6494da5d8e1887dd9401a657303e526b984274" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.075195 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:29:17 crc kubenswrapper[4870]: E0130 08:29:17.075591 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" containerName="proxy-httpd" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.075608 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" containerName="proxy-httpd" Jan 30 08:29:17 crc kubenswrapper[4870]: E0130 08:29:17.075624 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" containerName="sg-core" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.075631 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" containerName="sg-core" Jan 30 08:29:17 crc kubenswrapper[4870]: E0130 08:29:17.075645 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" containerName="ceilometer-notification-agent" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.075651 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" containerName="ceilometer-notification-agent" Jan 30 08:29:17 crc kubenswrapper[4870]: E0130 08:29:17.075661 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c58c61f5-64cf-4fe3-9792-a9d7b0987188" containerName="init" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.075667 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="c58c61f5-64cf-4fe3-9792-a9d7b0987188" containerName="init" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.075856 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" containerName="ceilometer-notification-agent" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.075870 4870 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" containerName="proxy-httpd" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.075925 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" containerName="sg-core" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.075938 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="c58c61f5-64cf-4fe3-9792-a9d7b0987188" containerName="init" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.077700 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.087015 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.087211 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.094914 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.193792 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-scripts\") pod \"ceilometer-0\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.198548 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6jkw\" (UniqueName: \"kubernetes.io/projected/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-kube-api-access-q6jkw\") pod \"ceilometer-0\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.198637 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-log-httpd\") pod \"ceilometer-0\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.198912 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.199060 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-config-data\") pod \"ceilometer-0\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.199152 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.199180 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-run-httpd\") pod \"ceilometer-0\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.302234 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6jkw\" (UniqueName: \"kubernetes.io/projected/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-kube-api-access-q6jkw\") pod \"ceilometer-0\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.302507 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-log-httpd\") pod \"ceilometer-0\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.302553 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.302589 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-config-data\") pod \"ceilometer-0\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.302614 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.302633 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-run-httpd\") pod \"ceilometer-0\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.302668 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-scripts\") pod \"ceilometer-0\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.303260 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-log-httpd\") pod \"ceilometer-0\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.304008 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6d69bf9957-gj6dt"] Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.312136 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-run-httpd\") pod \"ceilometer-0\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.327719 4870 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-config-data\") pod \"ceilometer-0\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.329602 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.330866 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d69bf9957-gj6dt"] Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.330979 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.335449 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.337309 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.337487 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.343513 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-scripts\") pod \"ceilometer-0\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.347485 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6jkw\" (UniqueName: \"kubernetes.io/projected/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-kube-api-access-q6jkw\") pod \"ceilometer-0\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.405656 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59zhw\" (UniqueName: \"kubernetes.io/projected/a50dec5c-d013-42b7-8a60-c405d5c93362-kube-api-access-59zhw\") pod \"neutron-6d69bf9957-gj6dt\" (UID: \"a50dec5c-d013-42b7-8a60-c405d5c93362\") " pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.405720 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a50dec5c-d013-42b7-8a60-c405d5c93362-internal-tls-certs\") pod \"neutron-6d69bf9957-gj6dt\" (UID: \"a50dec5c-d013-42b7-8a60-c405d5c93362\") " pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.405744 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a50dec5c-d013-42b7-8a60-c405d5c93362-config\") pod \"neutron-6d69bf9957-gj6dt\" (UID: \"a50dec5c-d013-42b7-8a60-c405d5c93362\") " pod="openstack/neutron-6d69bf9957-gj6dt" Jan 
30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.405807 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50dec5c-d013-42b7-8a60-c405d5c93362-combined-ca-bundle\") pod \"neutron-6d69bf9957-gj6dt\" (UID: \"a50dec5c-d013-42b7-8a60-c405d5c93362\") " pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.405868 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a50dec5c-d013-42b7-8a60-c405d5c93362-httpd-config\") pod \"neutron-6d69bf9957-gj6dt\" (UID: \"a50dec5c-d013-42b7-8a60-c405d5c93362\") " pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.406008 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a50dec5c-d013-42b7-8a60-c405d5c93362-ovndb-tls-certs\") pod \"neutron-6d69bf9957-gj6dt\" (UID: \"a50dec5c-d013-42b7-8a60-c405d5c93362\") " pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.406081 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a50dec5c-d013-42b7-8a60-c405d5c93362-public-tls-certs\") pod \"neutron-6d69bf9957-gj6dt\" (UID: \"a50dec5c-d013-42b7-8a60-c405d5c93362\") " pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.509361 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.510108 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a50dec5c-d013-42b7-8a60-c405d5c93362-internal-tls-certs\") pod \"neutron-6d69bf9957-gj6dt\" (UID: \"a50dec5c-d013-42b7-8a60-c405d5c93362\") " pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.510158 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a50dec5c-d013-42b7-8a60-c405d5c93362-config\") pod \"neutron-6d69bf9957-gj6dt\" (UID: \"a50dec5c-d013-42b7-8a60-c405d5c93362\") " pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.510215 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50dec5c-d013-42b7-8a60-c405d5c93362-combined-ca-bundle\") pod \"neutron-6d69bf9957-gj6dt\" (UID: \"a50dec5c-d013-42b7-8a60-c405d5c93362\") " pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.510272 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a50dec5c-d013-42b7-8a60-c405d5c93362-httpd-config\") pod \"neutron-6d69bf9957-gj6dt\" (UID: \"a50dec5c-d013-42b7-8a60-c405d5c93362\") " pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.510299 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a50dec5c-d013-42b7-8a60-c405d5c93362-ovndb-tls-certs\") pod 
\"neutron-6d69bf9957-gj6dt\" (UID: \"a50dec5c-d013-42b7-8a60-c405d5c93362\") " pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.510361 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a50dec5c-d013-42b7-8a60-c405d5c93362-public-tls-certs\") pod \"neutron-6d69bf9957-gj6dt\" (UID: \"a50dec5c-d013-42b7-8a60-c405d5c93362\") " pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.510407 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59zhw\" (UniqueName: \"kubernetes.io/projected/a50dec5c-d013-42b7-8a60-c405d5c93362-kube-api-access-59zhw\") pod \"neutron-6d69bf9957-gj6dt\" (UID: \"a50dec5c-d013-42b7-8a60-c405d5c93362\") " pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.522645 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a50dec5c-d013-42b7-8a60-c405d5c93362-public-tls-certs\") pod \"neutron-6d69bf9957-gj6dt\" (UID: \"a50dec5c-d013-42b7-8a60-c405d5c93362\") " pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.522868 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a50dec5c-d013-42b7-8a60-c405d5c93362-config\") pod \"neutron-6d69bf9957-gj6dt\" (UID: \"a50dec5c-d013-42b7-8a60-c405d5c93362\") " pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.523233 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a50dec5c-d013-42b7-8a60-c405d5c93362-internal-tls-certs\") pod \"neutron-6d69bf9957-gj6dt\" (UID: \"a50dec5c-d013-42b7-8a60-c405d5c93362\") " pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.523579 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a50dec5c-d013-42b7-8a60-c405d5c93362-ovndb-tls-certs\") pod \"neutron-6d69bf9957-gj6dt\" (UID: \"a50dec5c-d013-42b7-8a60-c405d5c93362\") " pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.523671 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50dec5c-d013-42b7-8a60-c405d5c93362-combined-ca-bundle\") pod \"neutron-6d69bf9957-gj6dt\" (UID: \"a50dec5c-d013-42b7-8a60-c405d5c93362\") " pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.529675 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59zhw\" (UniqueName: \"kubernetes.io/projected/a50dec5c-d013-42b7-8a60-c405d5c93362-kube-api-access-59zhw\") pod \"neutron-6d69bf9957-gj6dt\" (UID: \"a50dec5c-d013-42b7-8a60-c405d5c93362\") " pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.533377 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a50dec5c-d013-42b7-8a60-c405d5c93362-httpd-config\") pod \"neutron-6d69bf9957-gj6dt\" (UID: \"a50dec5c-d013-42b7-8a60-c405d5c93362\") " pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 
08:29:17.538151 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-656959885f-8m9f8" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.611494 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-ovsdbserver-sb\") pod \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.611826 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47htr\" (UniqueName: \"kubernetes.io/projected/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-kube-api-access-47htr\") pod \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.611949 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-dns-swift-storage-0\") pod \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.611976 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-ovsdbserver-nb\") pod \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.612042 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-dns-svc\") pod \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.612105 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-config\") pod \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\" (UID: \"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa\") " Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.634090 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-kube-api-access-47htr" (OuterVolumeSpecName: "kube-api-access-47htr") pod "9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa" (UID: "9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa"). InnerVolumeSpecName "kube-api-access-47htr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.684913 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa" (UID: "9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.715266 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.715302 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47htr\" (UniqueName: \"kubernetes.io/projected/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-kube-api-access-47htr\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.760390 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa" (UID: "9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.790373 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.822221 4870 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.847240 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b94ff658f-bmntr" event={"ID":"a3bc44ff-bc04-4e44-bb13-ff62f43057f5","Type":"ContainerStarted","Data":"901d7ab5cd075a77d8bbaef11b7f69e428520ec1d79e2ffb17e7fc2f4b0ce91b"} Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.849682 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d675956bc-zzkss" event={"ID":"3688605b-306e-4093-93d5-b96cae2a80de","Type":"ContainerStarted","Data":"0e54e3d5282b98c208b0bfac76119dbcb9798dc658f490c1212de7282601c32e"} Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.849938 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.856304 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54fb8bddb6-w78xn" event={"ID":"8a32795f-6328-4d51-a69a-60be965b17f0","Type":"ContainerStarted","Data":"73a46e911f963119f7efe1f461bfa36e17f07e688eb4337a0912046a69817e39"} Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.860103 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-config" (OuterVolumeSpecName: "config") pod "9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa" (UID: "9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.865768 4870 generic.go:334] "Generic (PLEG): container finished" podID="d501bb9c-d88d-4362-a48e-4d0347ecc90e" containerID="ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc" exitCode=137 Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.865825 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"d501bb9c-d88d-4362-a48e-4d0347ecc90e","Type":"ContainerDied","Data":"ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc"} Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.883850 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-656959885f-8m9f8" event={"ID":"9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa","Type":"ContainerDied","Data":"27ed939972c481c8d851e169e12137d6f31939cc5f5f7a13dd1d93784b1b442d"} Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.883919 4870 scope.go:117] "RemoveContainer" containerID="34ec090b045bffd5d1230bb03e6f1ad024b19b6f7c9f069106bed74b50df542e" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.884009 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-656959885f-8m9f8" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.887291 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6b94ff658f-bmntr" podStartSLOduration=4.6205938159999995 podStartE2EDuration="9.887271241s" podCreationTimestamp="2026-01-30 08:29:08 +0000 UTC" firstStartedPulling="2026-01-30 08:29:10.341283744 +0000 UTC m=+1189.036830853" lastFinishedPulling="2026-01-30 08:29:15.607961169 +0000 UTC m=+1194.303508278" observedRunningTime="2026-01-30 08:29:17.865399724 +0000 UTC m=+1196.560946833" watchObservedRunningTime="2026-01-30 08:29:17.887271241 +0000 UTC m=+1196.582818350" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.888459 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-54fb8bddb6-w78xn" podStartSLOduration=5.899615374 podStartE2EDuration="9.888454369s" podCreationTimestamp="2026-01-30 08:29:08 +0000 UTC" firstStartedPulling="2026-01-30 08:29:11.746807275 +0000 UTC m=+1190.442354384" lastFinishedPulling="2026-01-30 08:29:15.73564627 +0000 UTC m=+1194.431193379" observedRunningTime="2026-01-30 08:29:17.886346983 +0000 UTC m=+1196.581894092" watchObservedRunningTime="2026-01-30 08:29:17.888454369 +0000 UTC m=+1196.584001478" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.897236 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0","Type":"ContainerStarted","Data":"2d530828db1e13fb59b8a6f3bf5f6b711b1aef5265c44e2042172678ae798ad8"} Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.907101 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa" (UID: "9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.914151 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-55b585f57f-9h2lg" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.924662 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.924699 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.932827 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa" (UID: "9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:17 crc kubenswrapper[4870]: I0130 08:29:17.963575 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d675956bc-zzkss" podStartSLOduration=5.963557168 podStartE2EDuration="5.963557168s" podCreationTimestamp="2026-01-30 08:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:29:17.913391402 +0000 UTC m=+1196.608938511" watchObservedRunningTime="2026-01-30 08:29:17.963557168 +0000 UTC m=+1196.659104277" Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.026164 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.096198 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2" path="/var/lib/kubelet/pods/3bf6a0a0-7d14-4cbd-96e4-c81ac5366fb2/volumes" Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.172685 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:29:18 crc kubenswrapper[4870]: W0130 08:29:18.182252 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbebd196d_f8e4_466e_aa1f_99a65e3c7c6f.slice/crio-ea0f16d0885c4f830bf99a5da540d693451546a651ab72659db0f8c0dde59721 WatchSource:0}: Error finding container ea0f16d0885c4f830bf99a5da540d693451546a651ab72659db0f8c0dde59721: Status 404 returned error can't find the container with id ea0f16d0885c4f830bf99a5da540d693451546a651ab72659db0f8c0dde59721 Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.217993 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.218369 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.223145 4870 scope.go:117] "RemoveContainer" containerID="1368b589787d7b14188bdd5cbf6d5c41177fec471c90618303e481623b136b42" Jan 30 08:29:18 crc kubenswrapper[4870]: E0130 08:29:18.223547 4870 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(8628af25-d5e4-46a0-adec-4c25ca39676b)\"" pod="openstack/watcher-decision-engine-0" podUID="8628af25-d5e4-46a0-adec-4c25ca39676b" Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.283950 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-656959885f-8m9f8"] Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.297932 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-656959885f-8m9f8"] Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.384471 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.438759 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d501bb9c-d88d-4362-a48e-4d0347ecc90e-logs\") pod \"d501bb9c-d88d-4362-a48e-4d0347ecc90e\" (UID: \"d501bb9c-d88d-4362-a48e-4d0347ecc90e\") " Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.438812 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d501bb9c-d88d-4362-a48e-4d0347ecc90e-config-data\") pod \"d501bb9c-d88d-4362-a48e-4d0347ecc90e\" (UID: \"d501bb9c-d88d-4362-a48e-4d0347ecc90e\") " Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.438862 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqd86\" (UniqueName: \"kubernetes.io/projected/d501bb9c-d88d-4362-a48e-4d0347ecc90e-kube-api-access-gqd86\") pod \"d501bb9c-d88d-4362-a48e-4d0347ecc90e\" (UID: \"d501bb9c-d88d-4362-a48e-4d0347ecc90e\") " Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.438941 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d501bb9c-d88d-4362-a48e-4d0347ecc90e-combined-ca-bundle\") pod \"d501bb9c-d88d-4362-a48e-4d0347ecc90e\" (UID: \"d501bb9c-d88d-4362-a48e-4d0347ecc90e\") " Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.439202 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d501bb9c-d88d-4362-a48e-4d0347ecc90e-logs" (OuterVolumeSpecName: "logs") pod "d501bb9c-d88d-4362-a48e-4d0347ecc90e" (UID: "d501bb9c-d88d-4362-a48e-4d0347ecc90e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.439494 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d501bb9c-d88d-4362-a48e-4d0347ecc90e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.465108 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d501bb9c-d88d-4362-a48e-4d0347ecc90e-kube-api-access-gqd86" (OuterVolumeSpecName: "kube-api-access-gqd86") pod "d501bb9c-d88d-4362-a48e-4d0347ecc90e" (UID: "d501bb9c-d88d-4362-a48e-4d0347ecc90e"). InnerVolumeSpecName "kube-api-access-gqd86". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.501046 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d501bb9c-d88d-4362-a48e-4d0347ecc90e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d501bb9c-d88d-4362-a48e-4d0347ecc90e" (UID: "d501bb9c-d88d-4362-a48e-4d0347ecc90e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.541853 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqd86\" (UniqueName: \"kubernetes.io/projected/d501bb9c-d88d-4362-a48e-4d0347ecc90e-kube-api-access-gqd86\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.541909 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d501bb9c-d88d-4362-a48e-4d0347ecc90e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.545021 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d501bb9c-d88d-4362-a48e-4d0347ecc90e-config-data" (OuterVolumeSpecName: "config-data") pod "d501bb9c-d88d-4362-a48e-4d0347ecc90e" (UID: "d501bb9c-d88d-4362-a48e-4d0347ecc90e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.643240 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d501bb9c-d88d-4362-a48e-4d0347ecc90e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.774676 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.802640 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d69bf9957-gj6dt"] Jan 30 08:29:18 crc kubenswrapper[4870]: W0130 08:29:18.812940 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda50dec5c_d013_42b7_8a60_c405d5c93362.slice/crio-79e7e3be0b7084487efb953a05edee78110e607fdba9d97ffe461391c9aa3bcb WatchSource:0}: Error finding container 79e7e3be0b7084487efb953a05edee78110e607fdba9d97ffe461391c9aa3bcb: Status 404 returned error can't find the container with id 79e7e3be0b7084487efb953a05edee78110e607fdba9d97ffe461391c9aa3bcb Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.923145 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d69bf9957-gj6dt" event={"ID":"a50dec5c-d013-42b7-8a60-c405d5c93362","Type":"ContainerStarted","Data":"79e7e3be0b7084487efb953a05edee78110e607fdba9d97ffe461391c9aa3bcb"} Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.939618 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"d501bb9c-d88d-4362-a48e-4d0347ecc90e","Type":"ContainerDied","Data":"d6051b0c5bd2d63d9f43ae131b460100c45ef28a76e95d9c82c1f29baab7429d"} Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.939669 4870 scope.go:117] "RemoveContainer" containerID="ac4b1626212791c0a56219dd9b2abaf2c7a3d91e287f4b28aad9d510e177e2dc" Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.939809 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.950769 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f","Type":"ContainerStarted","Data":"ea0f16d0885c4f830bf99a5da540d693451546a651ab72659db0f8c0dde59721"} Jan 30 08:29:18 crc kubenswrapper[4870]: I0130 08:29:18.967923 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9f83ba22-1075-4159-b19d-f0b9ceec4ac3","Type":"ContainerStarted","Data":"a557094499d086ce164562ed8ad45b4d04434481b6d30eb79016392157566f9b"} Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.049949 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.086944 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.105150 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 30 08:29:19 crc kubenswrapper[4870]: E0130 08:29:19.105540 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d501bb9c-d88d-4362-a48e-4d0347ecc90e" containerName="watcher-applier" Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.105557 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d501bb9c-d88d-4362-a48e-4d0347ecc90e" containerName="watcher-applier" Jan 30 08:29:19 crc kubenswrapper[4870]: E0130 08:29:19.105599 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa" containerName="init" Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.105606 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa" containerName="init" Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.105775 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="d501bb9c-d88d-4362-a48e-4d0347ecc90e" containerName="watcher-applier" Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.105795 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa" containerName="init" Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.106426 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.109419 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.109722 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.160294 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vtgt\" (UniqueName: \"kubernetes.io/projected/4061e0b3-e3ae-4ef0-a979-6028df77da5c-kube-api-access-2vtgt\") pod \"watcher-applier-0\" (UID: \"4061e0b3-e3ae-4ef0-a979-6028df77da5c\") " pod="openstack/watcher-applier-0" Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.160543 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4061e0b3-e3ae-4ef0-a979-6028df77da5c-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"4061e0b3-e3ae-4ef0-a979-6028df77da5c\") " pod="openstack/watcher-applier-0" Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.160585 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4061e0b3-e3ae-4ef0-a979-6028df77da5c-logs\") pod \"watcher-applier-0\" (UID: \"4061e0b3-e3ae-4ef0-a979-6028df77da5c\") " pod="openstack/watcher-applier-0" Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.160731 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4061e0b3-e3ae-4ef0-a979-6028df77da5c-config-data\") pod \"watcher-applier-0\" (UID: \"4061e0b3-e3ae-4ef0-a979-6028df77da5c\") " pod="openstack/watcher-applier-0" Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.262240 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vtgt\" (UniqueName: \"kubernetes.io/projected/4061e0b3-e3ae-4ef0-a979-6028df77da5c-kube-api-access-2vtgt\") pod \"watcher-applier-0\" (UID: \"4061e0b3-e3ae-4ef0-a979-6028df77da5c\") " pod="openstack/watcher-applier-0" Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.262651 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4061e0b3-e3ae-4ef0-a979-6028df77da5c-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"4061e0b3-e3ae-4ef0-a979-6028df77da5c\") " pod="openstack/watcher-applier-0" Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.262681 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4061e0b3-e3ae-4ef0-a979-6028df77da5c-logs\") pod \"watcher-applier-0\" (UID: \"4061e0b3-e3ae-4ef0-a979-6028df77da5c\") " pod="openstack/watcher-applier-0" Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.262770 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4061e0b3-e3ae-4ef0-a979-6028df77da5c-config-data\") pod \"watcher-applier-0\" (UID: \"4061e0b3-e3ae-4ef0-a979-6028df77da5c\") " pod="openstack/watcher-applier-0" Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.264479 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4061e0b3-e3ae-4ef0-a979-6028df77da5c-logs\") pod \"watcher-applier-0\" (UID: \"4061e0b3-e3ae-4ef0-a979-6028df77da5c\") " pod="openstack/watcher-applier-0" Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.282422 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.283602 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4061e0b3-e3ae-4ef0-a979-6028df77da5c-config-data\") pod \"watcher-applier-0\" (UID: \"4061e0b3-e3ae-4ef0-a979-6028df77da5c\") " pod="openstack/watcher-applier-0" Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.284096 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4061e0b3-e3ae-4ef0-a979-6028df77da5c-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"4061e0b3-e3ae-4ef0-a979-6028df77da5c\") " pod="openstack/watcher-applier-0" Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.301415 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vtgt\" (UniqueName: \"kubernetes.io/projected/4061e0b3-e3ae-4ef0-a979-6028df77da5c-kube-api-access-2vtgt\") pod \"watcher-applier-0\" (UID: \"4061e0b3-e3ae-4ef0-a979-6028df77da5c\") " pod="openstack/watcher-applier-0" Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.399985 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.469576 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.505616 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.670795 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.766584 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:29:19 crc kubenswrapper[4870]: I0130 08:29:19.992295 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d69bf9957-gj6dt" event={"ID":"a50dec5c-d013-42b7-8a60-c405d5c93362","Type":"ContainerStarted","Data":"682cb1d7ee584cf4bdfb29f68d57d3bf09cfb5067562aa6aa9e2cf5f1e08d2ee"} Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.023640 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f839b4e9-f9f0-489d-b04b-14b03ab6895b" containerName="cinder-scheduler" containerID="cri-o://43e89a562f59d4e0caa713b4d3d7d10459a54c334d8bf93738ef4bfb17bc36b1" gracePeriod=30 Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.024672 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0","Type":"ContainerStarted","Data":"cdbf7af2d49e2059c333123a290e9cf8cf7bad8409952cd6d195d8442d2d2794"} Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.024966 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f839b4e9-f9f0-489d-b04b-14b03ab6895b" containerName="probe" containerID="cri-o://8eb8ea85632818a452e1edbc831618a751f9abde5e34ca94d13504592653fdb8" gracePeriod=30 Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.137209 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa" path="/var/lib/kubelet/pods/9b9285b0-ed06-4b59-9f79-a1b7a89ac0aa/volumes" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.138001 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d501bb9c-d88d-4362-a48e-4d0347ecc90e" path="/var/lib/kubelet/pods/d501bb9c-d88d-4362-a48e-4d0347ecc90e/volumes" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.178953 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.178935154 podStartE2EDuration="8.178935154s" podCreationTimestamp="2026-01-30 08:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:29:20.047435942 +0000 UTC m=+1198.742983071" watchObservedRunningTime="2026-01-30 08:29:20.178935154 +0000 UTC m=+1198.874482263" Jan 30 08:29:20 crc kubenswrapper[4870]: W0130 08:29:20.204008 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4061e0b3_e3ae_4ef0_a979_6028df77da5c.slice/crio-d0e619b7c1f9178e64d9b20d2250c41b27f6a0c4164e338c9e1b9a56d79380fa WatchSource:0}: Error finding container d0e619b7c1f9178e64d9b20d2250c41b27f6a0c4164e338c9e1b9a56d79380fa: Status 404 returned error can't find the container with id d0e619b7c1f9178e64d9b20d2250c41b27f6a0c4164e338c9e1b9a56d79380fa Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.209770 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.384401 4870 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.387946 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.392327 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-427ds" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.392512 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.392643 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.403562 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.494809 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a63902b-36ef-479e-8124-86f7a7f3f8db-openstack-config-secret\") pod \"openstackclient\" (UID: \"1a63902b-36ef-479e-8124-86f7a7f3f8db\") " pod="openstack/openstackclient" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.494904 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a63902b-36ef-479e-8124-86f7a7f3f8db-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1a63902b-36ef-479e-8124-86f7a7f3f8db\") " pod="openstack/openstackclient" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.494953 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1a63902b-36ef-479e-8124-86f7a7f3f8db-openstack-config\") pod \"openstackclient\" (UID: \"1a63902b-36ef-479e-8124-86f7a7f3f8db\") " pod="openstack/openstackclient" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.495016 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpt2x\" (UniqueName: \"kubernetes.io/projected/1a63902b-36ef-479e-8124-86f7a7f3f8db-kube-api-access-cpt2x\") pod \"openstackclient\" (UID: \"1a63902b-36ef-479e-8124-86f7a7f3f8db\") " pod="openstack/openstackclient" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.596920 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1a63902b-36ef-479e-8124-86f7a7f3f8db-openstack-config\") pod \"openstackclient\" (UID: \"1a63902b-36ef-479e-8124-86f7a7f3f8db\") " pod="openstack/openstackclient" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.597315 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpt2x\" (UniqueName: \"kubernetes.io/projected/1a63902b-36ef-479e-8124-86f7a7f3f8db-kube-api-access-cpt2x\") pod \"openstackclient\" (UID: \"1a63902b-36ef-479e-8124-86f7a7f3f8db\") " pod="openstack/openstackclient" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.597394 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a63902b-36ef-479e-8124-86f7a7f3f8db-openstack-config-secret\") pod \"openstackclient\" (UID: 
\"1a63902b-36ef-479e-8124-86f7a7f3f8db\") " pod="openstack/openstackclient" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.597474 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a63902b-36ef-479e-8124-86f7a7f3f8db-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1a63902b-36ef-479e-8124-86f7a7f3f8db\") " pod="openstack/openstackclient" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.597754 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1a63902b-36ef-479e-8124-86f7a7f3f8db-openstack-config\") pod \"openstackclient\" (UID: \"1a63902b-36ef-479e-8124-86f7a7f3f8db\") " pod="openstack/openstackclient" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.601218 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a63902b-36ef-479e-8124-86f7a7f3f8db-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1a63902b-36ef-479e-8124-86f7a7f3f8db\") " pod="openstack/openstackclient" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.617330 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a63902b-36ef-479e-8124-86f7a7f3f8db-openstack-config-secret\") pod \"openstackclient\" (UID: \"1a63902b-36ef-479e-8124-86f7a7f3f8db\") " pod="openstack/openstackclient" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.617832 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpt2x\" (UniqueName: \"kubernetes.io/projected/1a63902b-36ef-479e-8124-86f7a7f3f8db-kube-api-access-cpt2x\") pod \"openstackclient\" (UID: \"1a63902b-36ef-479e-8124-86f7a7f3f8db\") " pod="openstack/openstackclient" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.762722 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.763465 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.793796 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.841937 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.843224 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.858789 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.906198 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/204a0d39-f7b0-4468-a82f-9fcc49fc1281-combined-ca-bundle\") pod \"openstackclient\" (UID: \"204a0d39-f7b0-4468-a82f-9fcc49fc1281\") " pod="openstack/openstackclient" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.906314 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/204a0d39-f7b0-4468-a82f-9fcc49fc1281-openstack-config-secret\") pod \"openstackclient\" (UID: \"204a0d39-f7b0-4468-a82f-9fcc49fc1281\") " pod="openstack/openstackclient" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.906345 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/204a0d39-f7b0-4468-a82f-9fcc49fc1281-openstack-config\") pod \"openstackclient\" (UID: \"204a0d39-f7b0-4468-a82f-9fcc49fc1281\") " pod="openstack/openstackclient" Jan 30 08:29:20 crc kubenswrapper[4870]: I0130 08:29:20.906401 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvzdt\" (UniqueName: \"kubernetes.io/projected/204a0d39-f7b0-4468-a82f-9fcc49fc1281-kube-api-access-dvzdt\") pod \"openstackclient\" (UID: \"204a0d39-f7b0-4468-a82f-9fcc49fc1281\") " pod="openstack/openstackclient" Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.015244 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/204a0d39-f7b0-4468-a82f-9fcc49fc1281-combined-ca-bundle\") pod \"openstackclient\" (UID: \"204a0d39-f7b0-4468-a82f-9fcc49fc1281\") " pod="openstack/openstackclient" Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.015364 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/204a0d39-f7b0-4468-a82f-9fcc49fc1281-openstack-config-secret\") pod \"openstackclient\" (UID: \"204a0d39-f7b0-4468-a82f-9fcc49fc1281\") " pod="openstack/openstackclient" Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.015385 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/204a0d39-f7b0-4468-a82f-9fcc49fc1281-openstack-config\") pod \"openstackclient\" (UID: \"204a0d39-f7b0-4468-a82f-9fcc49fc1281\") " pod="openstack/openstackclient" Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.015437 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvzdt\" (UniqueName: \"kubernetes.io/projected/204a0d39-f7b0-4468-a82f-9fcc49fc1281-kube-api-access-dvzdt\") pod \"openstackclient\" (UID: \"204a0d39-f7b0-4468-a82f-9fcc49fc1281\") " pod="openstack/openstackclient" Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.020554 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/204a0d39-f7b0-4468-a82f-9fcc49fc1281-openstack-config\") pod \"openstackclient\" (UID: 
\"204a0d39-f7b0-4468-a82f-9fcc49fc1281\") " pod="openstack/openstackclient" Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.025708 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/204a0d39-f7b0-4468-a82f-9fcc49fc1281-openstack-config-secret\") pod \"openstackclient\" (UID: \"204a0d39-f7b0-4468-a82f-9fcc49fc1281\") " pod="openstack/openstackclient" Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.039811 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/204a0d39-f7b0-4468-a82f-9fcc49fc1281-combined-ca-bundle\") pod \"openstackclient\" (UID: \"204a0d39-f7b0-4468-a82f-9fcc49fc1281\") " pod="openstack/openstackclient" Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.071704 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvzdt\" (UniqueName: \"kubernetes.io/projected/204a0d39-f7b0-4468-a82f-9fcc49fc1281-kube-api-access-dvzdt\") pod \"openstackclient\" (UID: \"204a0d39-f7b0-4468-a82f-9fcc49fc1281\") " pod="openstack/openstackclient" Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.087057 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9f83ba22-1075-4159-b19d-f0b9ceec4ac3","Type":"ContainerStarted","Data":"fcef3a1004b84a8cdcd9a3aaa97b96b2c634246495cb7db1a07176496c9f5b70"} Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.087211 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9f83ba22-1075-4159-b19d-f0b9ceec4ac3" containerName="glance-log" containerID="cri-o://a557094499d086ce164562ed8ad45b4d04434481b6d30eb79016392157566f9b" gracePeriod=30 Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.087533 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9f83ba22-1075-4159-b19d-f0b9ceec4ac3" containerName="glance-httpd" containerID="cri-o://fcef3a1004b84a8cdcd9a3aaa97b96b2c634246495cb7db1a07176496c9f5b70" gracePeriod=30 Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.124263 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"4061e0b3-e3ae-4ef0-a979-6028df77da5c","Type":"ContainerStarted","Data":"d2981178746e59630fa9af78b555b16f88ae090d539ab38dace0e408ac984122"} Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.124305 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"4061e0b3-e3ae-4ef0-a979-6028df77da5c","Type":"ContainerStarted","Data":"d0e619b7c1f9178e64d9b20d2250c41b27f6a0c4164e338c9e1b9a56d79380fa"} Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.147392 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.147365131 podStartE2EDuration="9.147365131s" podCreationTimestamp="2026-01-30 08:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:29:21.125210215 +0000 UTC m=+1199.820757324" watchObservedRunningTime="2026-01-30 08:29:21.147365131 +0000 UTC m=+1199.842912240" Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.149259 4870 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-external-api-0" podUID="f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0" containerName="glance-log" containerID="cri-o://2d530828db1e13fb59b8a6f3bf5f6b711b1aef5265c44e2042172678ae798ad8" gracePeriod=30 Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.149571 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d69bf9957-gj6dt" event={"ID":"a50dec5c-d013-42b7-8a60-c405d5c93362","Type":"ContainerStarted","Data":"7344b11ec3ba1aa5689326e47ed586eecca8d1c99a0b89bf83ead8f26faf7332"} Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.149725 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0" containerName="glance-httpd" containerID="cri-o://cdbf7af2d49e2059c333123a290e9cf8cf7bad8409952cd6d195d8442d2d2794" gracePeriod=30 Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.150208 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.174497 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6d69bf9957-gj6dt" podStartSLOduration=4.174481752 podStartE2EDuration="4.174481752s" podCreationTimestamp="2026-01-30 08:29:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:29:21.172835031 +0000 UTC m=+1199.868382140" watchObservedRunningTime="2026-01-30 08:29:21.174481752 +0000 UTC m=+1199.870028861" Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.190125 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=3.184986302 podStartE2EDuration="3.184986302s" podCreationTimestamp="2026-01-30 08:29:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:29:21.149778556 +0000 UTC m=+1199.845325665" watchObservedRunningTime="2026-01-30 08:29:21.184986302 +0000 UTC m=+1199.880533411" Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.197351 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.651490 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:21 crc kubenswrapper[4870]: I0130 08:29:21.786040 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:22 crc kubenswrapper[4870]: I0130 08:29:22.191227 4870 generic.go:334] "Generic (PLEG): container finished" podID="f839b4e9-f9f0-489d-b04b-14b03ab6895b" containerID="8eb8ea85632818a452e1edbc831618a751f9abde5e34ca94d13504592653fdb8" exitCode=0 Jan 30 08:29:22 crc kubenswrapper[4870]: I0130 08:29:22.191470 4870 generic.go:334] "Generic (PLEG): container finished" podID="f839b4e9-f9f0-489d-b04b-14b03ab6895b" containerID="43e89a562f59d4e0caa713b4d3d7d10459a54c334d8bf93738ef4bfb17bc36b1" exitCode=0 Jan 30 08:29:22 crc kubenswrapper[4870]: I0130 08:29:22.191506 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f839b4e9-f9f0-489d-b04b-14b03ab6895b","Type":"ContainerDied","Data":"8eb8ea85632818a452e1edbc831618a751f9abde5e34ca94d13504592653fdb8"} Jan 30 08:29:22 crc kubenswrapper[4870]: I0130 08:29:22.191529 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f839b4e9-f9f0-489d-b04b-14b03ab6895b","Type":"ContainerDied","Data":"43e89a562f59d4e0caa713b4d3d7d10459a54c334d8bf93738ef4bfb17bc36b1"} Jan 30 08:29:22 crc kubenswrapper[4870]: I0130 08:29:22.193568 4870 generic.go:334] "Generic (PLEG): container finished" podID="9f83ba22-1075-4159-b19d-f0b9ceec4ac3" containerID="fcef3a1004b84a8cdcd9a3aaa97b96b2c634246495cb7db1a07176496c9f5b70" exitCode=0 Jan 30 08:29:22 crc kubenswrapper[4870]: I0130 08:29:22.193599 4870 generic.go:334] "Generic (PLEG): container finished" podID="9f83ba22-1075-4159-b19d-f0b9ceec4ac3" containerID="a557094499d086ce164562ed8ad45b4d04434481b6d30eb79016392157566f9b" exitCode=143 Jan 30 08:29:22 crc kubenswrapper[4870]: I0130 08:29:22.193638 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9f83ba22-1075-4159-b19d-f0b9ceec4ac3","Type":"ContainerDied","Data":"fcef3a1004b84a8cdcd9a3aaa97b96b2c634246495cb7db1a07176496c9f5b70"} Jan 30 08:29:22 crc kubenswrapper[4870]: I0130 08:29:22.193665 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9f83ba22-1075-4159-b19d-f0b9ceec4ac3","Type":"ContainerDied","Data":"a557094499d086ce164562ed8ad45b4d04434481b6d30eb79016392157566f9b"} Jan 30 08:29:22 crc kubenswrapper[4870]: I0130 08:29:22.201906 4870 generic.go:334] "Generic (PLEG): container finished" podID="f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0" containerID="cdbf7af2d49e2059c333123a290e9cf8cf7bad8409952cd6d195d8442d2d2794" exitCode=0 Jan 30 08:29:22 crc kubenswrapper[4870]: I0130 08:29:22.201937 4870 generic.go:334] "Generic (PLEG): container finished" podID="f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0" containerID="2d530828db1e13fb59b8a6f3bf5f6b711b1aef5265c44e2042172678ae798ad8" exitCode=143 Jan 30 08:29:22 crc kubenswrapper[4870]: I0130 08:29:22.202774 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0","Type":"ContainerDied","Data":"cdbf7af2d49e2059c333123a290e9cf8cf7bad8409952cd6d195d8442d2d2794"} Jan 30 08:29:22 crc kubenswrapper[4870]: I0130 
08:29:22.202799 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0","Type":"ContainerDied","Data":"2d530828db1e13fb59b8a6f3bf5f6b711b1aef5265c44e2042172678ae798ad8"} Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.226110 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.317801 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.326641 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fbb4d475f-66fsw"] Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.353168 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" podUID="7fc2a1f3-54bc-4554-a413-69bc35b58a2f" containerName="dnsmasq-dns" containerID="cri-o://893bc6818effa10ecee42122ccd6f6afc59777130894069439f5549f0dfabacb" gracePeriod=10 Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.475347 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.484005 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.484461 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfc5p\" (UniqueName: \"kubernetes.io/projected/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-kube-api-access-qfc5p\") pod \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.484500 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-config-data\") pod \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.484533 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-logs\") pod \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.484662 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.484702 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-scripts\") pod \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.484800 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-combined-ca-bundle\") pod \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\" (UID: 
\"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.484815 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-httpd-run\") pod \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\" (UID: \"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0\") " Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.485602 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0" (UID: "f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.498227 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-logs" (OuterVolumeSpecName: "logs") pod "f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0" (UID: "f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.505079 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-scripts" (OuterVolumeSpecName: "scripts") pod "f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0" (UID: "f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.509121 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-kube-api-access-qfc5p" (OuterVolumeSpecName: "kube-api-access-qfc5p") pod "f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0" (UID: "f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0"). InnerVolumeSpecName "kube-api-access-qfc5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.515039 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0" (UID: "f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.586064 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-config-data\") pod \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.586345 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcs4s\" (UniqueName: \"kubernetes.io/projected/f839b4e9-f9f0-489d-b04b-14b03ab6895b-kube-api-access-wcs4s\") pod \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.586465 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-scripts\") pod \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.586573 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-httpd-run\") pod \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.586651 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-config-data-custom\") pod \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.586745 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-logs\") pod \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.586900 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.586993 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-scripts\") pod \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.587072 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx4qh\" (UniqueName: \"kubernetes.io/projected/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-kube-api-access-tx4qh\") pod \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") " Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.587238 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f839b4e9-f9f0-489d-b04b-14b03ab6895b-etc-machine-id\") pod \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " Jan 30 08:29:23 crc 
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.587398 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-config-data\") pod \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") "
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.587484 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-combined-ca-bundle\") pod \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\" (UID: \"9f83ba22-1075-4159-b19d-f0b9ceec4ac3\") "
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.589015 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfc5p\" (UniqueName: \"kubernetes.io/projected/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-kube-api-access-qfc5p\") on node \"crc\" DevicePath \"\""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.589118 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-logs\") on node \"crc\" DevicePath \"\""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.589189 4870 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.589245 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.589308 4870 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.589510 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f839b4e9-f9f0-489d-b04b-14b03ab6895b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f839b4e9-f9f0-489d-b04b-14b03ab6895b" (UID: "f839b4e9-f9f0-489d-b04b-14b03ab6895b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.595190 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9f83ba22-1075-4159-b19d-f0b9ceec4ac3" (UID: "9f83ba22-1075-4159-b19d-f0b9ceec4ac3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.599233 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-logs" (OuterVolumeSpecName: "logs") pod "9f83ba22-1075-4159-b19d-f0b9ceec4ac3" (UID: "9f83ba22-1075-4159-b19d-f0b9ceec4ac3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.605059 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0" (UID: "f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.622079 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f839b4e9-f9f0-489d-b04b-14b03ab6895b" (UID: "f839b4e9-f9f0-489d-b04b-14b03ab6895b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.633831 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-scripts" (OuterVolumeSpecName: "scripts") pod "f839b4e9-f9f0-489d-b04b-14b03ab6895b" (UID: "f839b4e9-f9f0-489d-b04b-14b03ab6895b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.637030 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f839b4e9-f9f0-489d-b04b-14b03ab6895b-kube-api-access-wcs4s" (OuterVolumeSpecName: "kube-api-access-wcs4s") pod "f839b4e9-f9f0-489d-b04b-14b03ab6895b" (UID: "f839b4e9-f9f0-489d-b04b-14b03ab6895b"). InnerVolumeSpecName "kube-api-access-wcs4s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.637150 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-kube-api-access-tx4qh" (OuterVolumeSpecName: "kube-api-access-tx4qh") pod "9f83ba22-1075-4159-b19d-f0b9ceec4ac3" (UID: "9f83ba22-1075-4159-b19d-f0b9ceec4ac3"). InnerVolumeSpecName "kube-api-access-tx4qh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.646013 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-config-data" (OuterVolumeSpecName: "config-data") pod "f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0" (UID: "f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.646130 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "9f83ba22-1075-4159-b19d-f0b9ceec4ac3" (UID: "9f83ba22-1075-4159-b19d-f0b9ceec4ac3"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.659014 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-scripts" (OuterVolumeSpecName: "scripts") pod "9f83ba22-1075-4159-b19d-f0b9ceec4ac3" (UID: "9f83ba22-1075-4159-b19d-f0b9ceec4ac3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.667268 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.669076 4870 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.692083 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-logs\") on node \"crc\" DevicePath \"\""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.692118 4870 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.692146 4870 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.692157 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.692167 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx4qh\" (UniqueName: \"kubernetes.io/projected/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-kube-api-access-tx4qh\") on node \"crc\" DevicePath \"\""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.692178 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.692187 4870 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f839b4e9-f9f0-489d-b04b-14b03ab6895b-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.692194 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.692202 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcs4s\" (UniqueName: \"kubernetes.io/projected/f839b4e9-f9f0-489d-b04b-14b03ab6895b-kube-api-access-wcs4s\") on node \"crc\" DevicePath \"\""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.692209 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.692217 4870 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.692224 4870 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-config-data-custom\") on node
\"crc\" DevicePath \"\"" Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.755141 4870 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.766602 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f83ba22-1075-4159-b19d-f0b9ceec4ac3" (UID: "9f83ba22-1075-4159-b19d-f0b9ceec4ac3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.792280 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-config-data" (OuterVolumeSpecName: "config-data") pod "9f83ba22-1075-4159-b19d-f0b9ceec4ac3" (UID: "9f83ba22-1075-4159-b19d-f0b9ceec4ac3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.798035 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f839b4e9-f9f0-489d-b04b-14b03ab6895b" (UID: "f839b4e9-f9f0-489d-b04b-14b03ab6895b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.811147 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-combined-ca-bundle\") pod \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\" (UID: \"f839b4e9-f9f0-489d-b04b-14b03ab6895b\") " Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.811797 4870 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.811815 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.811827 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f83ba22-1075-4159-b19d-f0b9ceec4ac3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:23 crc kubenswrapper[4870]: W0130 08:29:23.811917 4870 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f839b4e9-f9f0-489d-b04b-14b03ab6895b/volumes/kubernetes.io~secret/combined-ca-bundle Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.811929 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f839b4e9-f9f0-489d-b04b-14b03ab6895b" (UID: "f839b4e9-f9f0-489d-b04b-14b03ab6895b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:23 crc kubenswrapper[4870]: E0130 08:29:23.848133 4870 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 08:29:23 crc kubenswrapper[4870]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_1a63902b-36ef-479e-8124-86f7a7f3f8db_0(a4ead0b94abe15dba32bbce83dc44d7770575e44e2d3952e71c8eacb73dcbb67): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a4ead0b94abe15dba32bbce83dc44d7770575e44e2d3952e71c8eacb73dcbb67" Netns:"/var/run/netns/75082fd7-70b5-4f46-b731-308218064a72" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=a4ead0b94abe15dba32bbce83dc44d7770575e44e2d3952e71c8eacb73dcbb67;K8S_POD_UID=1a63902b-36ef-479e-8124-86f7a7f3f8db" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/1a63902b-36ef-479e-8124-86f7a7f3f8db]: expected pod UID "1a63902b-36ef-479e-8124-86f7a7f3f8db" but got "204a0d39-f7b0-4468-a82f-9fcc49fc1281" from Kube API Jan 30 08:29:23 crc kubenswrapper[4870]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 08:29:23 crc kubenswrapper[4870]: > Jan 30 08:29:23 crc kubenswrapper[4870]: E0130 08:29:23.848208 4870 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 08:29:23 crc kubenswrapper[4870]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_1a63902b-36ef-479e-8124-86f7a7f3f8db_0(a4ead0b94abe15dba32bbce83dc44d7770575e44e2d3952e71c8eacb73dcbb67): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a4ead0b94abe15dba32bbce83dc44d7770575e44e2d3952e71c8eacb73dcbb67" Netns:"/var/run/netns/75082fd7-70b5-4f46-b731-308218064a72" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=a4ead0b94abe15dba32bbce83dc44d7770575e44e2d3952e71c8eacb73dcbb67;K8S_POD_UID=1a63902b-36ef-479e-8124-86f7a7f3f8db" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/1a63902b-36ef-479e-8124-86f7a7f3f8db]: expected pod UID "1a63902b-36ef-479e-8124-86f7a7f3f8db" but got "204a0d39-f7b0-4468-a82f-9fcc49fc1281" from Kube API Jan 30 08:29:23 crc kubenswrapper[4870]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 08:29:23 crc kubenswrapper[4870]: > pod="openstack/openstackclient" Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.897011 4870 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-config-data" (OuterVolumeSpecName: "config-data") pod "f839b4e9-f9f0-489d-b04b-14b03ab6895b" (UID: "f839b4e9-f9f0-489d-b04b-14b03ab6895b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.914508 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:23 crc kubenswrapper[4870]: I0130 08:29:23.914540 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f839b4e9-f9f0-489d-b04b-14b03ab6895b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.055409 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.115734 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.227633 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-dns-svc\") pod \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.227667 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-ovsdbserver-nb\") pod \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.227745 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-ovsdbserver-sb\") pod \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.227860 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-dns-swift-storage-0\") pod \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.227912 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7zw7\" (UniqueName: \"kubernetes.io/projected/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-kube-api-access-r7zw7\") pod \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.227940 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-config\") pod \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\" (UID: \"7fc2a1f3-54bc-4554-a413-69bc35b58a2f\") " Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.236743 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"204a0d39-f7b0-4468-a82f-9fcc49fc1281","Type":"ContainerStarted","Data":"9bd41cb2f723a62f616dfdcd31d5ddca0f265559ea5e020a6d33f3e5a2804f90"} Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.236782 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-kube-api-access-r7zw7" (OuterVolumeSpecName: "kube-api-access-r7zw7") pod "7fc2a1f3-54bc-4554-a413-69bc35b58a2f" (UID: "7fc2a1f3-54bc-4554-a413-69bc35b58a2f"). InnerVolumeSpecName "kube-api-access-r7zw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.250390 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9f83ba22-1075-4159-b19d-f0b9ceec4ac3","Type":"ContainerDied","Data":"8eeee39b66f7aa76b98a5db5ced5f5152684b0c7a60879e1f3f2cf389449b18e"} Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.250414 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.250436 4870 scope.go:117] "RemoveContainer" containerID="fcef3a1004b84a8cdcd9a3aaa97b96b2c634246495cb7db1a07176496c9f5b70" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.256641 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0","Type":"ContainerDied","Data":"fab1f41c1ff636465358dde7b3c49c985eff8cd0920b9b81dab7f18f354ae31d"} Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.256717 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.263274 4870 generic.go:334] "Generic (PLEG): container finished" podID="7fc2a1f3-54bc-4554-a413-69bc35b58a2f" containerID="893bc6818effa10ecee42122ccd6f6afc59777130894069439f5549f0dfabacb" exitCode=0 Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.263331 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" event={"ID":"7fc2a1f3-54bc-4554-a413-69bc35b58a2f","Type":"ContainerDied","Data":"893bc6818effa10ecee42122ccd6f6afc59777130894069439f5549f0dfabacb"} Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.263355 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" event={"ID":"7fc2a1f3-54bc-4554-a413-69bc35b58a2f","Type":"ContainerDied","Data":"7e31fa7de08f6fe5e038c395ec730d87de7703f0769687addc3ef103068cb495"} Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.263410 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fbb4d475f-66fsw" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.268798 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f839b4e9-f9f0-489d-b04b-14b03ab6895b","Type":"ContainerDied","Data":"104aebc0ebfaddceac864d04e025d33e5cf5c1cacba0ac14afe67f72e174ead6"} Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.268897 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.278913 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.279983 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f","Type":"ContainerStarted","Data":"532ae0bccf3ad7a9098b323c71e99784c6b9da57d38020ef2075d91ed4925bac"} Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.280038 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f","Type":"ContainerStarted","Data":"a3e37bcca1f4eba205fa794e1aff9acbffd8742aa8f58f2fa6ca0b268fdeff36"} Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.313646 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7fc2a1f3-54bc-4554-a413-69bc35b58a2f" (UID: "7fc2a1f3-54bc-4554-a413-69bc35b58a2f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.322912 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7fc2a1f3-54bc-4554-a413-69bc35b58a2f" (UID: "7fc2a1f3-54bc-4554-a413-69bc35b58a2f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.330157 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.330181 4870 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.330192 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7zw7\" (UniqueName: \"kubernetes.io/projected/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-kube-api-access-r7zw7\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.330660 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7fc2a1f3-54bc-4554-a413-69bc35b58a2f" (UID: "7fc2a1f3-54bc-4554-a413-69bc35b58a2f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.353055 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5564cc7ccb-wnwrs"] Jan 30 08:29:24 crc kubenswrapper[4870]: E0130 08:29:24.353930 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc2a1f3-54bc-4554-a413-69bc35b58a2f" containerName="init" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.353943 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc2a1f3-54bc-4554-a413-69bc35b58a2f" containerName="init" Jan 30 08:29:24 crc kubenswrapper[4870]: E0130 08:29:24.353971 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f839b4e9-f9f0-489d-b04b-14b03ab6895b" containerName="cinder-scheduler" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.353980 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f839b4e9-f9f0-489d-b04b-14b03ab6895b" containerName="cinder-scheduler" Jan 30 08:29:24 crc kubenswrapper[4870]: E0130 08:29:24.353995 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f83ba22-1075-4159-b19d-f0b9ceec4ac3" containerName="glance-httpd" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.354001 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f83ba22-1075-4159-b19d-f0b9ceec4ac3" containerName="glance-httpd" Jan 30 08:29:24 crc kubenswrapper[4870]: E0130 08:29:24.354022 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0" containerName="glance-httpd" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.354028 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0" containerName="glance-httpd" Jan 30 08:29:24 crc kubenswrapper[4870]: E0130 08:29:24.354057 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f839b4e9-f9f0-489d-b04b-14b03ab6895b" containerName="probe" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.354063 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f839b4e9-f9f0-489d-b04b-14b03ab6895b" containerName="probe" Jan 30 08:29:24 crc kubenswrapper[4870]: E0130 08:29:24.354077 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0" containerName="glance-log" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.354086 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0" containerName="glance-log" Jan 30 08:29:24 crc kubenswrapper[4870]: E0130 08:29:24.354094 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f83ba22-1075-4159-b19d-f0b9ceec4ac3" containerName="glance-log" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.354101 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f83ba22-1075-4159-b19d-f0b9ceec4ac3" containerName="glance-log" Jan 30 08:29:24 crc kubenswrapper[4870]: E0130 08:29:24.354125 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc2a1f3-54bc-4554-a413-69bc35b58a2f" containerName="dnsmasq-dns" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.354133 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc2a1f3-54bc-4554-a413-69bc35b58a2f" containerName="dnsmasq-dns" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.354449 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f83ba22-1075-4159-b19d-f0b9ceec4ac3" containerName="glance-log" Jan 30 08:29:24 crc kubenswrapper[4870]: 
I0130 08:29:24.354461 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0" containerName="glance-log" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.354476 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fc2a1f3-54bc-4554-a413-69bc35b58a2f" containerName="dnsmasq-dns" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.354493 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f839b4e9-f9f0-489d-b04b-14b03ab6895b" containerName="probe" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.354515 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f83ba22-1075-4159-b19d-f0b9ceec4ac3" containerName="glance-httpd" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.354526 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0" containerName="glance-httpd" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.354545 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f839b4e9-f9f0-489d-b04b-14b03ab6895b" containerName="cinder-scheduler" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.392325 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7fc2a1f3-54bc-4554-a413-69bc35b58a2f" (UID: "7fc2a1f3-54bc-4554-a413-69bc35b58a2f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.407176 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5564cc7ccb-wnwrs"] Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.409085 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.410380 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-config" (OuterVolumeSpecName: "config") pod "7fc2a1f3-54bc-4554-a413-69bc35b58a2f" (UID: "7fc2a1f3-54bc-4554-a413-69bc35b58a2f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.413394 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.429327 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.438792 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.438894 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.438914 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fc2a1f3-54bc-4554-a413-69bc35b58a2f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.507155 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.540434 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8zfh\" (UniqueName: \"kubernetes.io/projected/304a486b-b7cf-4418-82c9-7795b2331284-kube-api-access-b8zfh\") pod \"barbican-api-5564cc7ccb-wnwrs\" (UID: \"304a486b-b7cf-4418-82c9-7795b2331284\") " pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.540484 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/304a486b-b7cf-4418-82c9-7795b2331284-public-tls-certs\") pod \"barbican-api-5564cc7ccb-wnwrs\" (UID: \"304a486b-b7cf-4418-82c9-7795b2331284\") " pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.540576 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/304a486b-b7cf-4418-82c9-7795b2331284-config-data-custom\") pod \"barbican-api-5564cc7ccb-wnwrs\" (UID: \"304a486b-b7cf-4418-82c9-7795b2331284\") " pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.540602 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/304a486b-b7cf-4418-82c9-7795b2331284-combined-ca-bundle\") pod \"barbican-api-5564cc7ccb-wnwrs\" (UID: \"304a486b-b7cf-4418-82c9-7795b2331284\") " pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.540645 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/304a486b-b7cf-4418-82c9-7795b2331284-config-data\") pod \"barbican-api-5564cc7ccb-wnwrs\" (UID: \"304a486b-b7cf-4418-82c9-7795b2331284\") " pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.540670 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/304a486b-b7cf-4418-82c9-7795b2331284-internal-tls-certs\") pod \"barbican-api-5564cc7ccb-wnwrs\" (UID: \"304a486b-b7cf-4418-82c9-7795b2331284\") " pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.540705 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/304a486b-b7cf-4418-82c9-7795b2331284-logs\") pod \"barbican-api-5564cc7ccb-wnwrs\" (UID: \"304a486b-b7cf-4418-82c9-7795b2331284\") " pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.644821 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/304a486b-b7cf-4418-82c9-7795b2331284-config-data-custom\") pod \"barbican-api-5564cc7ccb-wnwrs\" (UID: \"304a486b-b7cf-4418-82c9-7795b2331284\") " pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.644873 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/304a486b-b7cf-4418-82c9-7795b2331284-combined-ca-bundle\") pod \"barbican-api-5564cc7ccb-wnwrs\" (UID: \"304a486b-b7cf-4418-82c9-7795b2331284\") " pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.644934 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/304a486b-b7cf-4418-82c9-7795b2331284-config-data\") pod \"barbican-api-5564cc7ccb-wnwrs\" (UID: \"304a486b-b7cf-4418-82c9-7795b2331284\") " pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.644967 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/304a486b-b7cf-4418-82c9-7795b2331284-internal-tls-certs\") pod \"barbican-api-5564cc7ccb-wnwrs\" (UID: \"304a486b-b7cf-4418-82c9-7795b2331284\") " pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.645002 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/304a486b-b7cf-4418-82c9-7795b2331284-logs\") pod \"barbican-api-5564cc7ccb-wnwrs\" (UID: \"304a486b-b7cf-4418-82c9-7795b2331284\") " pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.645035 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8zfh\" (UniqueName: \"kubernetes.io/projected/304a486b-b7cf-4418-82c9-7795b2331284-kube-api-access-b8zfh\") pod \"barbican-api-5564cc7ccb-wnwrs\" (UID: \"304a486b-b7cf-4418-82c9-7795b2331284\") " pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.645058 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/304a486b-b7cf-4418-82c9-7795b2331284-public-tls-certs\") pod \"barbican-api-5564cc7ccb-wnwrs\" (UID: \"304a486b-b7cf-4418-82c9-7795b2331284\") " pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.646482 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/304a486b-b7cf-4418-82c9-7795b2331284-config-data-custom\") pod \"barbican-api-5564cc7ccb-wnwrs\" (UID: \"304a486b-b7cf-4418-82c9-7795b2331284\") " pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.646774 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/304a486b-b7cf-4418-82c9-7795b2331284-logs\") pod \"barbican-api-5564cc7ccb-wnwrs\" (UID: \"304a486b-b7cf-4418-82c9-7795b2331284\") " pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.650528 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/304a486b-b7cf-4418-82c9-7795b2331284-internal-tls-certs\") pod \"barbican-api-5564cc7ccb-wnwrs\" (UID: \"304a486b-b7cf-4418-82c9-7795b2331284\") " pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.651246 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/304a486b-b7cf-4418-82c9-7795b2331284-config-data\") pod \"barbican-api-5564cc7ccb-wnwrs\" (UID: \"304a486b-b7cf-4418-82c9-7795b2331284\") " pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.651713 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/304a486b-b7cf-4418-82c9-7795b2331284-combined-ca-bundle\") pod \"barbican-api-5564cc7ccb-wnwrs\" (UID: \"304a486b-b7cf-4418-82c9-7795b2331284\") " pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.652608 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/304a486b-b7cf-4418-82c9-7795b2331284-public-tls-certs\") pod \"barbican-api-5564cc7ccb-wnwrs\" (UID: \"304a486b-b7cf-4418-82c9-7795b2331284\") " pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.683753 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8zfh\" (UniqueName: \"kubernetes.io/projected/304a486b-b7cf-4418-82c9-7795b2331284-kube-api-access-b8zfh\") pod \"barbican-api-5564cc7ccb-wnwrs\" (UID: \"304a486b-b7cf-4418-82c9-7795b2331284\") " pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.710919 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74569d8966-5sjxs" podUID="1872a14d-aeff-46f7-8430-c6fe0eb6973b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.803052 4870 scope.go:117] "RemoveContainer" containerID="a557094499d086ce164562ed8ad45b4d04434481b6d30eb79016392157566f9b" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.814587 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.826351 4870 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="1a63902b-36ef-479e-8124-86f7a7f3f8db" podUID="204a0d39-f7b0-4468-a82f-9fcc49fc1281" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.852262 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a63902b-36ef-479e-8124-86f7a7f3f8db-openstack-config-secret\") pod \"1a63902b-36ef-479e-8124-86f7a7f3f8db\" (UID: \"1a63902b-36ef-479e-8124-86f7a7f3f8db\") " Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.852314 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1a63902b-36ef-479e-8124-86f7a7f3f8db-openstack-config\") pod \"1a63902b-36ef-479e-8124-86f7a7f3f8db\" (UID: \"1a63902b-36ef-479e-8124-86f7a7f3f8db\") " Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.852369 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a63902b-36ef-479e-8124-86f7a7f3f8db-combined-ca-bundle\") pod \"1a63902b-36ef-479e-8124-86f7a7f3f8db\" (UID: \"1a63902b-36ef-479e-8124-86f7a7f3f8db\") " Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.852399 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpt2x\" (UniqueName: \"kubernetes.io/projected/1a63902b-36ef-479e-8124-86f7a7f3f8db-kube-api-access-cpt2x\") pod \"1a63902b-36ef-479e-8124-86f7a7f3f8db\" (UID: \"1a63902b-36ef-479e-8124-86f7a7f3f8db\") " Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.854164 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a63902b-36ef-479e-8124-86f7a7f3f8db-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "1a63902b-36ef-479e-8124-86f7a7f3f8db" (UID: "1a63902b-36ef-479e-8124-86f7a7f3f8db"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.865626 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a63902b-36ef-479e-8124-86f7a7f3f8db-kube-api-access-cpt2x" (OuterVolumeSpecName: "kube-api-access-cpt2x") pod "1a63902b-36ef-479e-8124-86f7a7f3f8db" (UID: "1a63902b-36ef-479e-8124-86f7a7f3f8db"). InnerVolumeSpecName "kube-api-access-cpt2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.870085 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a63902b-36ef-479e-8124-86f7a7f3f8db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a63902b-36ef-479e-8124-86f7a7f3f8db" (UID: "1a63902b-36ef-479e-8124-86f7a7f3f8db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.884670 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a63902b-36ef-479e-8124-86f7a7f3f8db-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "1a63902b-36ef-479e-8124-86f7a7f3f8db" (UID: "1a63902b-36ef-479e-8124-86f7a7f3f8db"). 
InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.955760 4870 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a63902b-36ef-479e-8124-86f7a7f3f8db-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.955796 4870 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1a63902b-36ef-479e-8124-86f7a7f3f8db-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.955805 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a63902b-36ef-479e-8124-86f7a7f3f8db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.955816 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpt2x\" (UniqueName: \"kubernetes.io/projected/1a63902b-36ef-479e-8124-86f7a7f3f8db-kube-api-access-cpt2x\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:24 crc kubenswrapper[4870]: I0130 08:29:24.967870 4870 scope.go:117] "RemoveContainer" containerID="cdbf7af2d49e2059c333123a290e9cf8cf7bad8409952cd6d195d8442d2d2794" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.023106 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.032030 4870 scope.go:117] "RemoveContainer" containerID="2d530828db1e13fb59b8a6f3bf5f6b711b1aef5265c44e2042172678ae798ad8" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.047374 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.051536 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.071538 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.085949 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.095521 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.097018 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.113398 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fbb4d475f-66fsw"] Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.117404 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.122581 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fbb4d475f-66fsw"] Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.134052 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.146448 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.157899 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.160488 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a1bbc0-d212-4a83-bea0-d40c261ddb18-config-data\") pod \"cinder-scheduler-0\" (UID: \"e7a1bbc0-d212-4a83-bea0-d40c261ddb18\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.160552 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7a1bbc0-d212-4a83-bea0-d40c261ddb18-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e7a1bbc0-d212-4a83-bea0-d40c261ddb18\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.160604 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7a1bbc0-d212-4a83-bea0-d40c261ddb18-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e7a1bbc0-d212-4a83-bea0-d40c261ddb18\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.160677 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a1bbc0-d212-4a83-bea0-d40c261ddb18-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e7a1bbc0-d212-4a83-bea0-d40c261ddb18\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.160711 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7a1bbc0-d212-4a83-bea0-d40c261ddb18-scripts\") pod \"cinder-scheduler-0\" (UID: \"e7a1bbc0-d212-4a83-bea0-d40c261ddb18\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.160727 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r56k4\" (UniqueName: \"kubernetes.io/projected/e7a1bbc0-d212-4a83-bea0-d40c261ddb18-kube-api-access-r56k4\") pod \"cinder-scheduler-0\" (UID: \"e7a1bbc0-d212-4a83-bea0-d40c261ddb18\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.173402 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:29:25 crc 
kubenswrapper[4870]: I0130 08:29:25.175090 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.180267 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.180337 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.180565 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.180720 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-58ht6" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.182472 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.207921 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.209438 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.218920 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.219094 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.226078 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.264747 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a1bbc0-d212-4a83-bea0-d40c261ddb18-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e7a1bbc0-d212-4a83-bea0-d40c261ddb18\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.264808 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7a1bbc0-d212-4a83-bea0-d40c261ddb18-scripts\") pod \"cinder-scheduler-0\" (UID: \"e7a1bbc0-d212-4a83-bea0-d40c261ddb18\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.264835 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r56k4\" (UniqueName: \"kubernetes.io/projected/e7a1bbc0-d212-4a83-bea0-d40c261ddb18-kube-api-access-r56k4\") pod \"cinder-scheduler-0\" (UID: \"e7a1bbc0-d212-4a83-bea0-d40c261ddb18\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.264947 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a1bbc0-d212-4a83-bea0-d40c261ddb18-config-data\") pod \"cinder-scheduler-0\" (UID: \"e7a1bbc0-d212-4a83-bea0-d40c261ddb18\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.264991 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/e7a1bbc0-d212-4a83-bea0-d40c261ddb18-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e7a1bbc0-d212-4a83-bea0-d40c261ddb18\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.265043 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7a1bbc0-d212-4a83-bea0-d40c261ddb18-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e7a1bbc0-d212-4a83-bea0-d40c261ddb18\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.265271 4870 scope.go:117] "RemoveContainer" containerID="893bc6818effa10ecee42122ccd6f6afc59777130894069439f5549f0dfabacb" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.266997 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7a1bbc0-d212-4a83-bea0-d40c261ddb18-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e7a1bbc0-d212-4a83-bea0-d40c261ddb18\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.278473 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7a1bbc0-d212-4a83-bea0-d40c261ddb18-scripts\") pod \"cinder-scheduler-0\" (UID: \"e7a1bbc0-d212-4a83-bea0-d40c261ddb18\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.278592 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a1bbc0-d212-4a83-bea0-d40c261ddb18-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e7a1bbc0-d212-4a83-bea0-d40c261ddb18\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.288701 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7a1bbc0-d212-4a83-bea0-d40c261ddb18-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e7a1bbc0-d212-4a83-bea0-d40c261ddb18\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.288750 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a1bbc0-d212-4a83-bea0-d40c261ddb18-config-data\") pod \"cinder-scheduler-0\" (UID: \"e7a1bbc0-d212-4a83-bea0-d40c261ddb18\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.293364 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r56k4\" (UniqueName: \"kubernetes.io/projected/e7a1bbc0-d212-4a83-bea0-d40c261ddb18-kube-api-access-r56k4\") pod \"cinder-scheduler-0\" (UID: \"e7a1bbc0-d212-4a83-bea0-d40c261ddb18\") " pod="openstack/cinder-scheduler-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.322688 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.349162 4870 scope.go:117] "RemoveContainer" containerID="38a1debf6b26f10553fce6710f3f79987242363403d1aac4313b2a726405eb4c" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.366492 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.366535 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.366563 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.366594 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-scripts\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.366612 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/01e7af93-8480-4484-9558-5455eb00fa2b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.366633 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/65bb64e7-45f2-4b8d-94f0-34c21ac75042-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.366656 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65bb64e7-45f2-4b8d-94f0-34c21ac75042-logs\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.366674 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.366693 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.366734 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-scripts\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.366942 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46d96\" (UniqueName: \"kubernetes.io/projected/65bb64e7-45f2-4b8d-94f0-34c21ac75042-kube-api-access-46d96\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.367316 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp296\" (UniqueName: \"kubernetes.io/projected/01e7af93-8480-4484-9558-5455eb00fa2b-kube-api-access-bp296\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.367339 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-config-data\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.367406 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.367422 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01e7af93-8480-4484-9558-5455eb00fa2b-logs\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.367451 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-config-data\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.369254 4870 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="1a63902b-36ef-479e-8124-86f7a7f3f8db" podUID="204a0d39-f7b0-4468-a82f-9fcc49fc1281" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.402300 4870 scope.go:117] "RemoveContainer" 
containerID="893bc6818effa10ecee42122ccd6f6afc59777130894069439f5549f0dfabacb" Jan 30 08:29:25 crc kubenswrapper[4870]: E0130 08:29:25.411076 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"893bc6818effa10ecee42122ccd6f6afc59777130894069439f5549f0dfabacb\": container with ID starting with 893bc6818effa10ecee42122ccd6f6afc59777130894069439f5549f0dfabacb not found: ID does not exist" containerID="893bc6818effa10ecee42122ccd6f6afc59777130894069439f5549f0dfabacb" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.411131 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"893bc6818effa10ecee42122ccd6f6afc59777130894069439f5549f0dfabacb"} err="failed to get container status \"893bc6818effa10ecee42122ccd6f6afc59777130894069439f5549f0dfabacb\": rpc error: code = NotFound desc = could not find container \"893bc6818effa10ecee42122ccd6f6afc59777130894069439f5549f0dfabacb\": container with ID starting with 893bc6818effa10ecee42122ccd6f6afc59777130894069439f5549f0dfabacb not found: ID does not exist" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.411158 4870 scope.go:117] "RemoveContainer" containerID="38a1debf6b26f10553fce6710f3f79987242363403d1aac4313b2a726405eb4c" Jan 30 08:29:25 crc kubenswrapper[4870]: E0130 08:29:25.414732 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38a1debf6b26f10553fce6710f3f79987242363403d1aac4313b2a726405eb4c\": container with ID starting with 38a1debf6b26f10553fce6710f3f79987242363403d1aac4313b2a726405eb4c not found: ID does not exist" containerID="38a1debf6b26f10553fce6710f3f79987242363403d1aac4313b2a726405eb4c" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.414762 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38a1debf6b26f10553fce6710f3f79987242363403d1aac4313b2a726405eb4c"} err="failed to get container status \"38a1debf6b26f10553fce6710f3f79987242363403d1aac4313b2a726405eb4c\": rpc error: code = NotFound desc = could not find container \"38a1debf6b26f10553fce6710f3f79987242363403d1aac4313b2a726405eb4c\": container with ID starting with 38a1debf6b26f10553fce6710f3f79987242363403d1aac4313b2a726405eb4c not found: ID does not exist" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.414800 4870 scope.go:117] "RemoveContainer" containerID="8eb8ea85632818a452e1edbc831618a751f9abde5e34ca94d13504592653fdb8" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.469113 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.469160 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.469183 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-scripts\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.469217 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46d96\" (UniqueName: \"kubernetes.io/projected/65bb64e7-45f2-4b8d-94f0-34c21ac75042-kube-api-access-46d96\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.469254 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp296\" (UniqueName: \"kubernetes.io/projected/01e7af93-8480-4484-9558-5455eb00fa2b-kube-api-access-bp296\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.469271 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-config-data\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.469309 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.469327 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01e7af93-8480-4484-9558-5455eb00fa2b-logs\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.469366 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-config-data\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.469418 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.469448 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.469485 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.469517 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-scripts\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.469536 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/01e7af93-8480-4484-9558-5455eb00fa2b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.469553 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/65bb64e7-45f2-4b8d-94f0-34c21ac75042-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.469576 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65bb64e7-45f2-4b8d-94f0-34c21ac75042-logs\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.470153 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65bb64e7-45f2-4b8d-94f0-34c21ac75042-logs\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.470439 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01e7af93-8480-4484-9558-5455eb00fa2b-logs\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.471332 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.471431 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/01e7af93-8480-4484-9558-5455eb00fa2b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.472410 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/65bb64e7-45f2-4b8d-94f0-34c21ac75042-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc 
kubenswrapper[4870]: I0130 08:29:25.481713 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.484247 4870 scope.go:117] "RemoveContainer" containerID="43e89a562f59d4e0caa713b4d3d7d10459a54c334d8bf93738ef4bfb17bc36b1" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.484974 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-config-data\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.488241 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.496488 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-scripts\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.497451 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-config-data\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.498066 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.498480 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-scripts\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.499031 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.505461 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp296\" (UniqueName: \"kubernetes.io/projected/01e7af93-8480-4484-9558-5455eb00fa2b-kube-api-access-bp296\") pod \"glance-default-external-api-0\" (UID: 
\"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.508953 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.511118 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46d96\" (UniqueName: \"kubernetes.io/projected/65bb64e7-45f2-4b8d-94f0-34c21ac75042-kube-api-access-46d96\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.520119 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.545430 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.590530 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.621488 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.637615 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:29:25 crc kubenswrapper[4870]: I0130 08:29:25.820565 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5564cc7ccb-wnwrs"] Jan 30 08:29:26 crc kubenswrapper[4870]: I0130 08:29:26.098091 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a63902b-36ef-479e-8124-86f7a7f3f8db" path="/var/lib/kubelet/pods/1a63902b-36ef-479e-8124-86f7a7f3f8db/volumes" Jan 30 08:29:26 crc kubenswrapper[4870]: I0130 08:29:26.098569 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fc2a1f3-54bc-4554-a413-69bc35b58a2f" path="/var/lib/kubelet/pods/7fc2a1f3-54bc-4554-a413-69bc35b58a2f/volumes" Jan 30 08:29:26 crc kubenswrapper[4870]: I0130 08:29:26.105004 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f83ba22-1075-4159-b19d-f0b9ceec4ac3" path="/var/lib/kubelet/pods/9f83ba22-1075-4159-b19d-f0b9ceec4ac3/volumes" Jan 30 08:29:26 crc kubenswrapper[4870]: I0130 08:29:26.109068 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0" path="/var/lib/kubelet/pods/f7ab1ad2-11f4-4117-a7c9-9fbfe6e4cbc0/volumes" Jan 30 08:29:26 crc kubenswrapper[4870]: I0130 08:29:26.109726 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f839b4e9-f9f0-489d-b04b-14b03ab6895b" path="/var/lib/kubelet/pods/f839b4e9-f9f0-489d-b04b-14b03ab6895b/volumes" Jan 30 08:29:26 crc kubenswrapper[4870]: I0130 08:29:26.318661 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 08:29:26 crc kubenswrapper[4870]: I0130 08:29:26.360439 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f","Type":"ContainerStarted","Data":"d34d7c6bccea77a359da1387457bb8ca89ae48119f56d025bbc10a0703b7de08"} Jan 30 08:29:26 crc kubenswrapper[4870]: I0130 08:29:26.369607 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5564cc7ccb-wnwrs" event={"ID":"304a486b-b7cf-4418-82c9-7795b2331284","Type":"ContainerStarted","Data":"f7e5f6b0e4f969e3e5827d9a0ddf7fe62514654e337c73550a66c3dccfc2ec73"} Jan 30 08:29:26 crc kubenswrapper[4870]: I0130 08:29:26.369680 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5564cc7ccb-wnwrs" event={"ID":"304a486b-b7cf-4418-82c9-7795b2331284","Type":"ContainerStarted","Data":"79c4f3c21771817b6dd7e4cec0be32db1ab83c3de292da2f51b729907cc29073"} Jan 30 08:29:26 crc kubenswrapper[4870]: I0130 08:29:26.551969 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:29:26 crc kubenswrapper[4870]: I0130 08:29:26.650850 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:29:27 crc kubenswrapper[4870]: I0130 08:29:27.384138 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7a1bbc0-d212-4a83-bea0-d40c261ddb18","Type":"ContainerStarted","Data":"2353d39e0d2cc696c9a74d198b852c6958c449f8e29f1bc712e4ca7a874c28b7"} Jan 30 08:29:27 crc kubenswrapper[4870]: I0130 08:29:27.388474 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"65bb64e7-45f2-4b8d-94f0-34c21ac75042","Type":"ContainerStarted","Data":"efd4740849ad794d4e57051943e44782204bcc76846a706c3d143892f1a69cb3"} Jan 30 08:29:27 crc 
kubenswrapper[4870]: I0130 08:29:27.401542 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5564cc7ccb-wnwrs" event={"ID":"304a486b-b7cf-4418-82c9-7795b2331284","Type":"ContainerStarted","Data":"fd619663bd6a9506b679253344163273ee3ee7b2cf9826d6c161e14947ad6cde"} Jan 30 08:29:27 crc kubenswrapper[4870]: I0130 08:29:27.402633 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:27 crc kubenswrapper[4870]: I0130 08:29:27.402675 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:27 crc kubenswrapper[4870]: I0130 08:29:27.404442 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"01e7af93-8480-4484-9558-5455eb00fa2b","Type":"ContainerStarted","Data":"eda82349da897444026066edb7a8f71a1933756e2aef786c074692bf323e90ef"} Jan 30 08:29:27 crc kubenswrapper[4870]: I0130 08:29:27.427372 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5564cc7ccb-wnwrs" podStartSLOduration=3.427353742 podStartE2EDuration="3.427353742s" podCreationTimestamp="2026-01-30 08:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:29:27.426101782 +0000 UTC m=+1206.121648891" watchObservedRunningTime="2026-01-30 08:29:27.427353742 +0000 UTC m=+1206.122900851" Jan 30 08:29:28 crc kubenswrapper[4870]: I0130 08:29:28.217977 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 30 08:29:28 crc kubenswrapper[4870]: I0130 08:29:28.218554 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 30 08:29:28 crc kubenswrapper[4870]: I0130 08:29:28.219331 4870 scope.go:117] "RemoveContainer" containerID="1368b589787d7b14188bdd5cbf6d5c41177fec471c90618303e481623b136b42" Jan 30 08:29:28 crc kubenswrapper[4870]: I0130 08:29:28.424361 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"01e7af93-8480-4484-9558-5455eb00fa2b","Type":"ContainerStarted","Data":"b7338a02d0d2eab98544b50920230e4bcceedd83d4acefe6a39ec62e93edaabe"} Jan 30 08:29:28 crc kubenswrapper[4870]: I0130 08:29:28.430097 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f","Type":"ContainerStarted","Data":"655c44a300fccee93f2efd7de792fed09bcb0d74830f54032964e7370e7ef3a7"} Jan 30 08:29:28 crc kubenswrapper[4870]: I0130 08:29:28.430242 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 08:29:28 crc kubenswrapper[4870]: I0130 08:29:28.432163 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7a1bbc0-d212-4a83-bea0-d40c261ddb18","Type":"ContainerStarted","Data":"33e244873a0e2a71b8bf3785a2b8928bcd1c3e50dc1af9364a6614c001d155b0"} Jan 30 08:29:28 crc kubenswrapper[4870]: I0130 08:29:28.434342 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"65bb64e7-45f2-4b8d-94f0-34c21ac75042","Type":"ContainerStarted","Data":"2b63187d02cf3f1e134d09ad354fab4f72d729ac96dbea12fd138ec6e9c9f8b1"} Jan 30 08:29:28 crc kubenswrapper[4870]: I0130 08:29:28.459186 4870 
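pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.470738873 podStartE2EDuration="12.45916508s" podCreationTimestamp="2026-01-30 08:29:16 +0000 UTC" firstStartedPulling="2026-01-30 08:29:18.185244124 +0000 UTC m=+1196.880791233" lastFinishedPulling="2026-01-30 08:29:27.173670331 +0000 UTC m=+1205.869217440" observedRunningTime="2026-01-30 08:29:28.452796219 +0000 UTC m=+1207.148343328" watchObservedRunningTime="2026-01-30 08:29:28.45916508 +0000 UTC m=+1207.154712199"

This ceilometer-0 record is the one latency line in the excerpt with every field populated, and its numbers are internally consistent: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration equals that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal Python check using only the timestamps printed in the record; fractions are truncated to microseconds, datetime's limit, so the last digits differ:

from datetime import datetime, timezone

def parse(ts):
    # Kubelet prints e.g. "2026-01-30 08:29:28.45916508 +0000 UTC"; keep the
    # date and clock, truncating fractional seconds to six digits.
    date, clock = ts.split()[:2]
    if "." in clock:
        whole, frac = clock.split(".")
        clock = whole + "." + frac[:6].ljust(6, "0")
    else:
        clock += ".000000"
    return datetime.strptime(date + " " + clock,
                             "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

created    = parse("2026-01-30 08:29:16 +0000 UTC")
first_pull = parse("2026-01-30 08:29:18.185244124 +0000 UTC")
last_pull  = parse("2026-01-30 08:29:27.173670331 +0000 UTC")
observed   = parse("2026-01-30 08:29:28.45916508 +0000 UTC")

e2e = (observed - created).total_seconds()
slo = e2e - (last_pull - first_pull).total_seconds()
print(f"podStartE2EDuration ~ {e2e:.6f}s")  # ~12.459165; the log says 12.45916508s
print(f"podStartSLOduration ~ {slo:.6f}s")  # ~3.470739;  the log says 3.470738873

The cinder and glance records that follow carry the zero value "0001-01-01 00:00:00" for both pull timestamps, which is why their SLO and E2E durations coincide.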
Jan 30 08:29:29 crc kubenswrapper[4870]: I0130 08:29:29.452783 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8628af25-d5e4-46a0-adec-4c25ca39676b","Type":"ContainerStarted","Data":"bbcef241c5e081af384f050ba2bd88c0480b531f6aa26fa9ee5db11c38f751ba"} Jan 30 08:29:29 crc kubenswrapper[4870]: I0130 08:29:29.455220 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"01e7af93-8480-4484-9558-5455eb00fa2b","Type":"ContainerStarted","Data":"5de813254396c24f18fe9fd8238b591bddeb0213951b91ac54dfc76965423fe5"} Jan 30 08:29:29 crc kubenswrapper[4870]: I0130 08:29:29.456858 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7a1bbc0-d212-4a83-bea0-d40c261ddb18","Type":"ContainerStarted","Data":"b5de64248c4680443dbd47aee692a183cc11004db6a62b46d676989b11c3d021"} Jan 30 08:29:29 crc kubenswrapper[4870]: I0130 08:29:29.462286 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"65bb64e7-45f2-4b8d-94f0-34c21ac75042","Type":"ContainerStarted","Data":"7626fef9e744bdf23c5143f0c48395b24bab8239f9cfdb06207150e977be0d18"} Jan 30 08:29:29 crc kubenswrapper[4870]: I0130 08:29:29.508217 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Jan 30 08:29:29 crc kubenswrapper[4870]: I0130 08:29:29.530300 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.530276352 podStartE2EDuration="4.530276352s" podCreationTimestamp="2026-01-30 08:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:29:29.51616901 +0000 UTC m=+1208.211716119" watchObservedRunningTime="2026-01-30 08:29:29.530276352 +0000 UTC m=+1208.225823461" Jan 30 08:29:29 crc kubenswrapper[4870]: I0130 08:29:29.560772 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.560754741 podStartE2EDuration="4.560754741s" podCreationTimestamp="2026-01-30 08:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:29:29.544943244 +0000 UTC m=+1208.240490353" watchObservedRunningTime="2026-01-30 08:29:29.560754741 +0000 UTC m=+1208.256301850" Jan 30 08:29:29 crc kubenswrapper[4870]: I0130 08:29:29.578469 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Jan 30 08:29:29 crc kubenswrapper[4870]: I0130 08:29:29.581839 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.581821702 podStartE2EDuration="4.581821702s" podCreationTimestamp="2026-01-30 08:29:25
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:29:29.574941257 +0000 UTC m=+1208.270488356" watchObservedRunningTime="2026-01-30 08:29:29.581821702 +0000 UTC m=+1208.277368811" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.356912 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74569d8966-5sjxs" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.362758 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.365777 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-56cfc8cc98-pfz9w" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.429457 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1872a14d-aeff-46f7-8430-c6fe0eb6973b-config-data\") pod \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.429498 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1872a14d-aeff-46f7-8430-c6fe0eb6973b-scripts\") pod \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.429526 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdz8p\" (UniqueName: \"kubernetes.io/projected/1872a14d-aeff-46f7-8430-c6fe0eb6973b-kube-api-access-fdz8p\") pod \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.429584 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1872a14d-aeff-46f7-8430-c6fe0eb6973b-horizon-tls-certs\") pod \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.429624 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1872a14d-aeff-46f7-8430-c6fe0eb6973b-combined-ca-bundle\") pod \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.429754 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1872a14d-aeff-46f7-8430-c6fe0eb6973b-logs\") pod \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.429817 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1872a14d-aeff-46f7-8430-c6fe0eb6973b-horizon-secret-key\") pod \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\" (UID: \"1872a14d-aeff-46f7-8430-c6fe0eb6973b\") " Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.435151 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1872a14d-aeff-46f7-8430-c6fe0eb6973b-kube-api-access-fdz8p" (OuterVolumeSpecName: 
"kube-api-access-fdz8p") pod "1872a14d-aeff-46f7-8430-c6fe0eb6973b" (UID: "1872a14d-aeff-46f7-8430-c6fe0eb6973b"). InnerVolumeSpecName "kube-api-access-fdz8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.435414 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1872a14d-aeff-46f7-8430-c6fe0eb6973b-logs" (OuterVolumeSpecName: "logs") pod "1872a14d-aeff-46f7-8430-c6fe0eb6973b" (UID: "1872a14d-aeff-46f7-8430-c6fe0eb6973b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.446121 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1872a14d-aeff-46f7-8430-c6fe0eb6973b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1872a14d-aeff-46f7-8430-c6fe0eb6973b" (UID: "1872a14d-aeff-46f7-8430-c6fe0eb6973b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.467554 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1872a14d-aeff-46f7-8430-c6fe0eb6973b-scripts" (OuterVolumeSpecName: "scripts") pod "1872a14d-aeff-46f7-8430-c6fe0eb6973b" (UID: "1872a14d-aeff-46f7-8430-c6fe0eb6973b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.473561 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1872a14d-aeff-46f7-8430-c6fe0eb6973b-config-data" (OuterVolumeSpecName: "config-data") pod "1872a14d-aeff-46f7-8430-c6fe0eb6973b" (UID: "1872a14d-aeff-46f7-8430-c6fe0eb6973b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.492697 4870 generic.go:334] "Generic (PLEG): container finished" podID="1872a14d-aeff-46f7-8430-c6fe0eb6973b" containerID="43897c9b161d8b60a5a17a99eb44b17b490c36c5693b312f71be7235a7fc2b25" exitCode=137 Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.492706 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1872a14d-aeff-46f7-8430-c6fe0eb6973b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1872a14d-aeff-46f7-8430-c6fe0eb6973b" (UID: "1872a14d-aeff-46f7-8430-c6fe0eb6973b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.492808 4870 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.492827 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74569d8966-5sjxs" event={"ID":"1872a14d-aeff-46f7-8430-c6fe0eb6973b","Type":"ContainerDied","Data":"43897c9b161d8b60a5a17a99eb44b17b490c36c5693b312f71be7235a7fc2b25"} Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.496482 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74569d8966-5sjxs" event={"ID":"1872a14d-aeff-46f7-8430-c6fe0eb6973b","Type":"ContainerDied","Data":"f7250a53827f362fa55ae4df1436ef860d73a09ab3dfd65756154cbbf24973a7"} Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.496513 4870 scope.go:117] "RemoveContainer" containerID="960b2516ed61848524349c99edfb46a19e044030828811dfd8fbc7164a66aca3" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.532030 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1872a14d-aeff-46f7-8430-c6fe0eb6973b-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.532287 4870 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1872a14d-aeff-46f7-8430-c6fe0eb6973b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.532296 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1872a14d-aeff-46f7-8430-c6fe0eb6973b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.532307 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1872a14d-aeff-46f7-8430-c6fe0eb6973b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.532316 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdz8p\" (UniqueName: \"kubernetes.io/projected/1872a14d-aeff-46f7-8430-c6fe0eb6973b-kube-api-access-fdz8p\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.532325 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1872a14d-aeff-46f7-8430-c6fe0eb6973b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.543772 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1872a14d-aeff-46f7-8430-c6fe0eb6973b-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "1872a14d-aeff-46f7-8430-c6fe0eb6973b" (UID: "1872a14d-aeff-46f7-8430-c6fe0eb6973b"). InnerVolumeSpecName "horizon-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.575053 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.592788 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.634439 4870 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1872a14d-aeff-46f7-8430-c6fe0eb6973b-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.688207 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-847c478677-wtndf"] Jan 30 08:29:30 crc kubenswrapper[4870]: E0130 08:29:30.688602 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1872a14d-aeff-46f7-8430-c6fe0eb6973b" containerName="horizon" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.688619 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="1872a14d-aeff-46f7-8430-c6fe0eb6973b" containerName="horizon" Jan 30 08:29:30 crc kubenswrapper[4870]: E0130 08:29:30.688635 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1872a14d-aeff-46f7-8430-c6fe0eb6973b" containerName="horizon-log" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.688643 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="1872a14d-aeff-46f7-8430-c6fe0eb6973b" containerName="horizon-log" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.688823 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="1872a14d-aeff-46f7-8430-c6fe0eb6973b" containerName="horizon-log" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.688860 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="1872a14d-aeff-46f7-8430-c6fe0eb6973b" containerName="horizon" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.692838 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.697422 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.697496 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.701430 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.705626 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-847c478677-wtndf"] Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.733042 4870 scope.go:117] "RemoveContainer" containerID="43897c9b161d8b60a5a17a99eb44b17b490c36c5693b312f71be7235a7fc2b25" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.739015 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c01b58ab-bb54-448b-83de-f70f08378751-internal-tls-certs\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.739261 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c01b58ab-bb54-448b-83de-f70f08378751-config-data\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.739287 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c01b58ab-bb54-448b-83de-f70f08378751-etc-swift\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.739340 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc2xq\" (UniqueName: \"kubernetes.io/projected/c01b58ab-bb54-448b-83de-f70f08378751-kube-api-access-hc2xq\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.739360 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c01b58ab-bb54-448b-83de-f70f08378751-combined-ca-bundle\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.739375 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c01b58ab-bb54-448b-83de-f70f08378751-public-tls-certs\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.739454 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/c01b58ab-bb54-448b-83de-f70f08378751-log-httpd\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.739474 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c01b58ab-bb54-448b-83de-f70f08378751-run-httpd\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.807266 4870 scope.go:117] "RemoveContainer" containerID="960b2516ed61848524349c99edfb46a19e044030828811dfd8fbc7164a66aca3" Jan 30 08:29:30 crc kubenswrapper[4870]: E0130 08:29:30.808115 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"960b2516ed61848524349c99edfb46a19e044030828811dfd8fbc7164a66aca3\": container with ID starting with 960b2516ed61848524349c99edfb46a19e044030828811dfd8fbc7164a66aca3 not found: ID does not exist" containerID="960b2516ed61848524349c99edfb46a19e044030828811dfd8fbc7164a66aca3" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.808152 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"960b2516ed61848524349c99edfb46a19e044030828811dfd8fbc7164a66aca3"} err="failed to get container status \"960b2516ed61848524349c99edfb46a19e044030828811dfd8fbc7164a66aca3\": rpc error: code = NotFound desc = could not find container \"960b2516ed61848524349c99edfb46a19e044030828811dfd8fbc7164a66aca3\": container with ID starting with 960b2516ed61848524349c99edfb46a19e044030828811dfd8fbc7164a66aca3 not found: ID does not exist" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.808178 4870 scope.go:117] "RemoveContainer" containerID="43897c9b161d8b60a5a17a99eb44b17b490c36c5693b312f71be7235a7fc2b25" Jan 30 08:29:30 crc kubenswrapper[4870]: E0130 08:29:30.812015 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43897c9b161d8b60a5a17a99eb44b17b490c36c5693b312f71be7235a7fc2b25\": container with ID starting with 43897c9b161d8b60a5a17a99eb44b17b490c36c5693b312f71be7235a7fc2b25 not found: ID does not exist" containerID="43897c9b161d8b60a5a17a99eb44b17b490c36c5693b312f71be7235a7fc2b25" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.812060 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43897c9b161d8b60a5a17a99eb44b17b490c36c5693b312f71be7235a7fc2b25"} err="failed to get container status \"43897c9b161d8b60a5a17a99eb44b17b490c36c5693b312f71be7235a7fc2b25\": rpc error: code = NotFound desc = could not find container \"43897c9b161d8b60a5a17a99eb44b17b490c36c5693b312f71be7235a7fc2b25\": container with ID starting with 43897c9b161d8b60a5a17a99eb44b17b490c36c5693b312f71be7235a7fc2b25 not found: ID does not exist" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.837957 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-74569d8966-5sjxs"] Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.840794 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc2xq\" (UniqueName: \"kubernetes.io/projected/c01b58ab-bb54-448b-83de-f70f08378751-kube-api-access-hc2xq\") pod 
\"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.840849 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c01b58ab-bb54-448b-83de-f70f08378751-combined-ca-bundle\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.840873 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c01b58ab-bb54-448b-83de-f70f08378751-public-tls-certs\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.841005 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c01b58ab-bb54-448b-83de-f70f08378751-log-httpd\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.841032 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c01b58ab-bb54-448b-83de-f70f08378751-run-httpd\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.841126 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c01b58ab-bb54-448b-83de-f70f08378751-internal-tls-certs\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.841168 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c01b58ab-bb54-448b-83de-f70f08378751-config-data\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.841192 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c01b58ab-bb54-448b-83de-f70f08378751-etc-swift\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.841634 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c01b58ab-bb54-448b-83de-f70f08378751-run-httpd\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.841896 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c01b58ab-bb54-448b-83de-f70f08378751-log-httpd\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " 
pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.846977 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c01b58ab-bb54-448b-83de-f70f08378751-internal-tls-certs\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.848465 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c01b58ab-bb54-448b-83de-f70f08378751-config-data\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.850392 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c01b58ab-bb54-448b-83de-f70f08378751-public-tls-certs\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.851324 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c01b58ab-bb54-448b-83de-f70f08378751-etc-swift\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.853616 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c01b58ab-bb54-448b-83de-f70f08378751-combined-ca-bundle\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.856087 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-74569d8966-5sjxs"] Jan 30 08:29:30 crc kubenswrapper[4870]: I0130 08:29:30.857793 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc2xq\" (UniqueName: \"kubernetes.io/projected/c01b58ab-bb54-448b-83de-f70f08378751-kube-api-access-hc2xq\") pod \"swift-proxy-847c478677-wtndf\" (UID: \"c01b58ab-bb54-448b-83de-f70f08378751\") " pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:31 crc kubenswrapper[4870]: I0130 08:29:31.033331 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:31 crc kubenswrapper[4870]: I0130 08:29:31.762109 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-847c478677-wtndf"] Jan 30 08:29:32 crc kubenswrapper[4870]: I0130 08:29:32.100328 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1872a14d-aeff-46f7-8430-c6fe0eb6973b" path="/var/lib/kubelet/pods/1872a14d-aeff-46f7-8430-c6fe0eb6973b/volumes" Jan 30 08:29:32 crc kubenswrapper[4870]: I0130 08:29:32.533021 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-847c478677-wtndf" event={"ID":"c01b58ab-bb54-448b-83de-f70f08378751","Type":"ContainerStarted","Data":"6e94f5f8fccf94dbe96dda526963c5ae06002d1ed02a8fdb5abbcd121fc34708"} Jan 30 08:29:32 crc kubenswrapper[4870]: I0130 08:29:32.533327 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-847c478677-wtndf" event={"ID":"c01b58ab-bb54-448b-83de-f70f08378751","Type":"ContainerStarted","Data":"d37875ada57cad08b3b77c2e55d1001af08440ce1b1712105d5ba2117cae59f6"} Jan 30 08:29:32 crc kubenswrapper[4870]: I0130 08:29:32.533339 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-847c478677-wtndf" event={"ID":"c01b58ab-bb54-448b-83de-f70f08378751","Type":"ContainerStarted","Data":"34c5c1b354646482c74b5ebbf08ef3e5f83e7b9bbb0b5a32ae883cfed8df6540"} Jan 30 08:29:32 crc kubenswrapper[4870]: I0130 08:29:32.533650 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:32 crc kubenswrapper[4870]: I0130 08:29:32.533740 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:32 crc kubenswrapper[4870]: I0130 08:29:32.558472 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-847c478677-wtndf" podStartSLOduration=2.558456815 podStartE2EDuration="2.558456815s" podCreationTimestamp="2026-01-30 08:29:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:29:32.553592753 +0000 UTC m=+1211.249139862" watchObservedRunningTime="2026-01-30 08:29:32.558456815 +0000 UTC m=+1211.254003924" Jan 30 08:29:33 crc kubenswrapper[4870]: I0130 08:29:33.544366 4870 generic.go:334] "Generic (PLEG): container finished" podID="8628af25-d5e4-46a0-adec-4c25ca39676b" containerID="bbcef241c5e081af384f050ba2bd88c0480b531f6aa26fa9ee5db11c38f751ba" exitCode=1 Jan 30 08:29:33 crc kubenswrapper[4870]: I0130 08:29:33.544475 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8628af25-d5e4-46a0-adec-4c25ca39676b","Type":"ContainerDied","Data":"bbcef241c5e081af384f050ba2bd88c0480b531f6aa26fa9ee5db11c38f751ba"} Jan 30 08:29:33 crc kubenswrapper[4870]: I0130 08:29:33.544871 4870 scope.go:117] "RemoveContainer" containerID="1368b589787d7b14188bdd5cbf6d5c41177fec471c90618303e481623b136b42" Jan 30 08:29:33 crc kubenswrapper[4870]: I0130 08:29:33.546123 4870 scope.go:117] "RemoveContainer" containerID="bbcef241c5e081af384f050ba2bd88c0480b531f6aa26fa9ee5db11c38f751ba" Jan 30 08:29:33 crc kubenswrapper[4870]: E0130 08:29:33.546390 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine 
pod=watcher-decision-engine-0_openstack(8628af25-d5e4-46a0-adec-4c25ca39676b)\"" pod="openstack/watcher-decision-engine-0" podUID="8628af25-d5e4-46a0-adec-4c25ca39676b" Jan 30 08:29:34 crc kubenswrapper[4870]: I0130 08:29:34.096672 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:29:34 crc kubenswrapper[4870]: I0130 08:29:34.117455 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" containerName="ceilometer-central-agent" containerID="cri-o://a3e37bcca1f4eba205fa794e1aff9acbffd8742aa8f58f2fa6ca0b268fdeff36" gracePeriod=30 Jan 30 08:29:34 crc kubenswrapper[4870]: I0130 08:29:34.117681 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" containerName="sg-core" containerID="cri-o://d34d7c6bccea77a359da1387457bb8ca89ae48119f56d025bbc10a0703b7de08" gracePeriod=30 Jan 30 08:29:34 crc kubenswrapper[4870]: I0130 08:29:34.117671 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" containerName="proxy-httpd" containerID="cri-o://655c44a300fccee93f2efd7de792fed09bcb0d74830f54032964e7370e7ef3a7" gracePeriod=30 Jan 30 08:29:34 crc kubenswrapper[4870]: I0130 08:29:34.117716 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" containerName="ceilometer-notification-agent" containerID="cri-o://532ae0bccf3ad7a9098b323c71e99784c6b9da57d38020ef2075d91ed4925bac" gracePeriod=30 Jan 30 08:29:34 crc kubenswrapper[4870]: I0130 08:29:34.577706 4870 generic.go:334] "Generic (PLEG): container finished" podID="bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" containerID="655c44a300fccee93f2efd7de792fed09bcb0d74830f54032964e7370e7ef3a7" exitCode=0 Jan 30 08:29:34 crc kubenswrapper[4870]: I0130 08:29:34.577741 4870 generic.go:334] "Generic (PLEG): container finished" podID="bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" containerID="d34d7c6bccea77a359da1387457bb8ca89ae48119f56d025bbc10a0703b7de08" exitCode=2 Jan 30 08:29:34 crc kubenswrapper[4870]: I0130 08:29:34.577767 4870 generic.go:334] "Generic (PLEG): container finished" podID="bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" containerID="a3e37bcca1f4eba205fa794e1aff9acbffd8742aa8f58f2fa6ca0b268fdeff36" exitCode=0 Jan 30 08:29:34 crc kubenswrapper[4870]: I0130 08:29:34.577819 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f","Type":"ContainerDied","Data":"655c44a300fccee93f2efd7de792fed09bcb0d74830f54032964e7370e7ef3a7"} Jan 30 08:29:34 crc kubenswrapper[4870]: I0130 08:29:34.577850 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f","Type":"ContainerDied","Data":"d34d7c6bccea77a359da1387457bb8ca89ae48119f56d025bbc10a0703b7de08"} Jan 30 08:29:34 crc kubenswrapper[4870]: I0130 08:29:34.577863 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f","Type":"ContainerDied","Data":"a3e37bcca1f4eba205fa794e1aff9acbffd8742aa8f58f2fa6ca0b268fdeff36"} Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.052267 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.165060 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-sg-core-conf-yaml\") pod \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.165123 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-log-httpd\") pod \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.165206 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-run-httpd\") pod \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.165238 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-combined-ca-bundle\") pod \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.165279 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-config-data\") pod \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.165312 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-scripts\") pod \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.165339 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6jkw\" (UniqueName: \"kubernetes.io/projected/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-kube-api-access-q6jkw\") pod \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\" (UID: \"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f\") " Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.165910 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" (UID: "bebd196d-f8e4-466e-aa1f-99a65e3c7c6f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.167738 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" (UID: "bebd196d-f8e4-466e-aa1f-99a65e3c7c6f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.172310 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-kube-api-access-q6jkw" (OuterVolumeSpecName: "kube-api-access-q6jkw") pod "bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" (UID: "bebd196d-f8e4-466e-aa1f-99a65e3c7c6f"). InnerVolumeSpecName "kube-api-access-q6jkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.172313 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-scripts" (OuterVolumeSpecName: "scripts") pod "bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" (UID: "bebd196d-f8e4-466e-aa1f-99a65e3c7c6f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.219626 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" (UID: "bebd196d-f8e4-466e-aa1f-99a65e3c7c6f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.269073 4870 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.269217 4870 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.269228 4870 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.269236 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.269245 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6jkw\" (UniqueName: \"kubernetes.io/projected/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-kube-api-access-q6jkw\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.309978 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-config-data" (OuterVolumeSpecName: "config-data") pod "bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" (UID: "bebd196d-f8e4-466e-aa1f-99a65e3c7c6f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.334013 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" (UID: "bebd196d-f8e4-466e-aa1f-99a65e3c7c6f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.372317 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.372347 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.595065 4870 generic.go:334] "Generic (PLEG): container finished" podID="bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" containerID="532ae0bccf3ad7a9098b323c71e99784c6b9da57d38020ef2075d91ed4925bac" exitCode=0 Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.595105 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f","Type":"ContainerDied","Data":"532ae0bccf3ad7a9098b323c71e99784c6b9da57d38020ef2075d91ed4925bac"} Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.595131 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bebd196d-f8e4-466e-aa1f-99a65e3c7c6f","Type":"ContainerDied","Data":"ea0f16d0885c4f830bf99a5da540d693451546a651ab72659db0f8c0dde59721"} Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.595148 4870 scope.go:117] "RemoveContainer" containerID="655c44a300fccee93f2efd7de792fed09bcb0d74830f54032964e7370e7ef3a7" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.595177 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.623002 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.623057 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.637397 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.644223 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.644260 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.670943 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.682034 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:29:35 crc kubenswrapper[4870]: E0130 08:29:35.682500 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" containerName="proxy-httpd" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.682514 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" containerName="proxy-httpd" Jan 30 08:29:35 crc kubenswrapper[4870]: E0130 08:29:35.682530 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" containerName="sg-core" 
Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.682538 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" containerName="sg-core" Jan 30 08:29:35 crc kubenswrapper[4870]: E0130 08:29:35.682565 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" containerName="ceilometer-notification-agent" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.682571 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" containerName="ceilometer-notification-agent" Jan 30 08:29:35 crc kubenswrapper[4870]: E0130 08:29:35.682580 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" containerName="ceilometer-central-agent" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.682586 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" containerName="ceilometer-central-agent" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.682763 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" containerName="ceilometer-notification-agent" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.682775 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" containerName="sg-core" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.682788 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" containerName="ceilometer-central-agent" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.682808 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" containerName="proxy-httpd" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.684507 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.684609 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.687087 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.687561 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.697028 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.712829 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.739073 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.755503 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.781228 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") " pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.783370 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-config-data\") pod \"ceilometer-0\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") " pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.783580 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7b7679e-30f5-4f8a-96e0-a1581691242d-run-httpd\") pod \"ceilometer-0\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") " pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.783732 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-scripts\") pod \"ceilometer-0\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") " pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.783863 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk6ng\" (UniqueName: \"kubernetes.io/projected/e7b7679e-30f5-4f8a-96e0-a1581691242d-kube-api-access-kk6ng\") pod \"ceilometer-0\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") " pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.783907 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") " pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 
08:29:35.783990 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7b7679e-30f5-4f8a-96e0-a1581691242d-log-httpd\") pod \"ceilometer-0\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") " pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.834728 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.885399 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-scripts\") pod \"ceilometer-0\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") " pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.885478 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk6ng\" (UniqueName: \"kubernetes.io/projected/e7b7679e-30f5-4f8a-96e0-a1581691242d-kube-api-access-kk6ng\") pod \"ceilometer-0\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") " pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.885503 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") " pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.885550 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7b7679e-30f5-4f8a-96e0-a1581691242d-log-httpd\") pod \"ceilometer-0\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") " pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.885581 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") " pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.885617 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-config-data\") pod \"ceilometer-0\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") " pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.885679 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7b7679e-30f5-4f8a-96e0-a1581691242d-run-httpd\") pod \"ceilometer-0\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") " pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.888259 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7b7679e-30f5-4f8a-96e0-a1581691242d-run-httpd\") pod \"ceilometer-0\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") " pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.888561 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7b7679e-30f5-4f8a-96e0-a1581691242d-log-httpd\") pod 
\"ceilometer-0\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") " pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.890663 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") " pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.892005 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-config-data\") pod \"ceilometer-0\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") " pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.892285 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-scripts\") pod \"ceilometer-0\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") " pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.902339 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") " pod="openstack/ceilometer-0" Jan 30 08:29:35 crc kubenswrapper[4870]: I0130 08:29:35.902944 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk6ng\" (UniqueName: \"kubernetes.io/projected/e7b7679e-30f5-4f8a-96e0-a1581691242d-kube-api-access-kk6ng\") pod \"ceilometer-0\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") " pod="openstack/ceilometer-0" Jan 30 08:29:36 crc kubenswrapper[4870]: I0130 08:29:36.040109 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:29:36 crc kubenswrapper[4870]: I0130 08:29:36.090051 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bebd196d-f8e4-466e-aa1f-99a65e3c7c6f" path="/var/lib/kubelet/pods/bebd196d-f8e4-466e-aa1f-99a65e3c7c6f/volumes" Jan 30 08:29:36 crc kubenswrapper[4870]: I0130 08:29:36.516514 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:36 crc kubenswrapper[4870]: I0130 08:29:36.606360 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 08:29:36 crc kubenswrapper[4870]: I0130 08:29:36.606407 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 08:29:36 crc kubenswrapper[4870]: I0130 08:29:36.606530 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 08:29:36 crc kubenswrapper[4870]: I0130 08:29:36.607292 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 08:29:36 crc kubenswrapper[4870]: I0130 08:29:36.766011 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:29:37 crc kubenswrapper[4870]: I0130 08:29:37.229848 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5564cc7ccb-wnwrs" Jan 30 08:29:37 crc kubenswrapper[4870]: I0130 08:29:37.329283 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-ddfdbf76d-mfqhx"] Jan 30 08:29:37 crc kubenswrapper[4870]: I0130 08:29:37.329700 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-ddfdbf76d-mfqhx" podUID="57a4731e-3232-4d27-acf8-9d34ee7570a7" containerName="barbican-api-log" containerID="cri-o://4bfee3b8db156e8f632e1b810fed5fff1f01f89361fe371e88524396b0b3e740" gracePeriod=30 Jan 30 08:29:37 crc kubenswrapper[4870]: I0130 08:29:37.330154 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-ddfdbf76d-mfqhx" podUID="57a4731e-3232-4d27-acf8-9d34ee7570a7" containerName="barbican-api" containerID="cri-o://9a536688d050dc6432091788f5363412218a7bb425a3e180118973e359516afe" gracePeriod=30 Jan 30 08:29:37 crc kubenswrapper[4870]: I0130 08:29:37.630397 4870 generic.go:334] "Generic (PLEG): container finished" podID="57a4731e-3232-4d27-acf8-9d34ee7570a7" containerID="4bfee3b8db156e8f632e1b810fed5fff1f01f89361fe371e88524396b0b3e740" exitCode=143 Jan 30 08:29:37 crc kubenswrapper[4870]: I0130 08:29:37.631244 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-ddfdbf76d-mfqhx" event={"ID":"57a4731e-3232-4d27-acf8-9d34ee7570a7","Type":"ContainerDied","Data":"4bfee3b8db156e8f632e1b810fed5fff1f01f89361fe371e88524396b0b3e740"} Jan 30 08:29:38 crc kubenswrapper[4870]: I0130 08:29:38.217896 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 30 08:29:38 crc kubenswrapper[4870]: I0130 08:29:38.217938 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 30 08:29:38 crc kubenswrapper[4870]: I0130 08:29:38.218587 4870 scope.go:117] "RemoveContainer" containerID="bbcef241c5e081af384f050ba2bd88c0480b531f6aa26fa9ee5db11c38f751ba" Jan 30 08:29:38 crc 
kubenswrapper[4870]: E0130 08:29:38.218814 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(8628af25-d5e4-46a0-adec-4c25ca39676b)\"" pod="openstack/watcher-decision-engine-0" podUID="8628af25-d5e4-46a0-adec-4c25ca39676b" Jan 30 08:29:38 crc kubenswrapper[4870]: I0130 08:29:38.640377 4870 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:29:38 crc kubenswrapper[4870]: I0130 08:29:38.640709 4870 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:29:39 crc kubenswrapper[4870]: I0130 08:29:39.259404 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-ddfdbf76d-mfqhx" podUID="57a4731e-3232-4d27-acf8-9d34ee7570a7" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.175:9311/healthcheck\": dial tcp 10.217.0.175:9311: connect: connection refused" Jan 30 08:29:39 crc kubenswrapper[4870]: I0130 08:29:39.259411 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-ddfdbf76d-mfqhx" podUID="57a4731e-3232-4d27-acf8-9d34ee7570a7" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.175:9311/healthcheck\": dial tcp 10.217.0.175:9311: connect: connection refused" Jan 30 08:29:39 crc kubenswrapper[4870]: I0130 08:29:39.653163 4870 generic.go:334] "Generic (PLEG): container finished" podID="57a4731e-3232-4d27-acf8-9d34ee7570a7" containerID="9a536688d050dc6432091788f5363412218a7bb425a3e180118973e359516afe" exitCode=0 Jan 30 08:29:39 crc kubenswrapper[4870]: I0130 08:29:39.653216 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-ddfdbf76d-mfqhx" event={"ID":"57a4731e-3232-4d27-acf8-9d34ee7570a7","Type":"ContainerDied","Data":"9a536688d050dc6432091788f5363412218a7bb425a3e180118973e359516afe"} Jan 30 08:29:40 crc kubenswrapper[4870]: I0130 08:29:40.385604 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 08:29:40 crc kubenswrapper[4870]: I0130 08:29:40.386094 4870 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:29:40 crc kubenswrapper[4870]: I0130 08:29:40.386647 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 08:29:40 crc kubenswrapper[4870]: I0130 08:29:40.386735 4870 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:29:40 crc kubenswrapper[4870]: I0130 08:29:40.542857 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 08:29:40 crc kubenswrapper[4870]: I0130 08:29:40.608643 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 08:29:40 crc kubenswrapper[4870]: I0130 08:29:40.948721 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-f966fd88d-sdpcn" Jan 30 08:29:41 crc kubenswrapper[4870]: I0130 08:29:41.044927 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:41 crc kubenswrapper[4870]: I0130 08:29:41.047425 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/swift-proxy-847c478677-wtndf" Jan 30 08:29:42 crc kubenswrapper[4870]: I0130 08:29:42.282376 4870 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podedd09a42-14b6-4161-ba2a-82c4cf4f5983"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podedd09a42-14b6-4161-ba2a-82c4cf4f5983] : Timed out while waiting for systemd to remove kubepods-besteffort-podedd09a42_14b6_4161_ba2a_82c4cf4f5983.slice" Jan 30 08:29:44 crc kubenswrapper[4870]: I0130 08:29:44.259551 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-ddfdbf76d-mfqhx" podUID="57a4731e-3232-4d27-acf8-9d34ee7570a7" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.175:9311/healthcheck\": dial tcp 10.217.0.175:9311: connect: connection refused" Jan 30 08:29:44 crc kubenswrapper[4870]: I0130 08:29:44.259558 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-ddfdbf76d-mfqhx" podUID="57a4731e-3232-4d27-acf8-9d34ee7570a7" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.175:9311/healthcheck\": dial tcp 10.217.0.175:9311: connect: connection refused" Jan 30 08:29:44 crc kubenswrapper[4870]: I0130 08:29:44.751803 4870 generic.go:334] "Generic (PLEG): container finished" podID="2c1333f8-2564-4b5c-84b9-0045d742c45f" containerID="15d4837d5c345debdddee40f4775635705eb028fe7633d8e6d5c855f92746c7a" exitCode=137 Jan 30 08:29:44 crc kubenswrapper[4870]: I0130 08:29:44.751842 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2c1333f8-2564-4b5c-84b9-0045d742c45f","Type":"ContainerDied","Data":"15d4837d5c345debdddee40f4775635705eb028fe7633d8e6d5c855f92746c7a"} Jan 30 08:29:45 crc kubenswrapper[4870]: E0130 08:29:45.471406 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-openstackclient:watcher_latest" Jan 30 08:29:45 crc kubenswrapper[4870]: E0130 08:29:45.471677 4870 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/podified-master-centos10/openstack-openstackclient:watcher_latest" Jan 30 08:29:45 crc kubenswrapper[4870]: E0130 08:29:45.471824 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:38.102.83.23:5001/podified-master-centos10/openstack-openstackclient:watcher_latest,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7ch687h599h88h5bdh5f9h54dh584h59fh649hb7h78h565h5f9hd5h664hdch69h65fh65h665h5d5h56h579hd9h679h54bh675hb7h5c7h6fh54q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dvzdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(204a0d39-f7b0-4468-a82f-9fcc49fc1281): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:29:45 crc kubenswrapper[4870]: E0130 08:29:45.473282 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="204a0d39-f7b0-4468-a82f-9fcc49fc1281" Jan 30 08:29:45 crc kubenswrapper[4870]: I0130 08:29:45.493055 4870 scope.go:117] "RemoveContainer" containerID="d34d7c6bccea77a359da1387457bb8ca89ae48119f56d025bbc10a0703b7de08" Jan 30 08:29:45 crc kubenswrapper[4870]: I0130 08:29:45.614712 4870 scope.go:117] "RemoveContainer" containerID="532ae0bccf3ad7a9098b323c71e99784c6b9da57d38020ef2075d91ed4925bac" Jan 30 08:29:45 crc kubenswrapper[4870]: I0130 08:29:45.654692 4870 scope.go:117] "RemoveContainer" containerID="a3e37bcca1f4eba205fa794e1aff9acbffd8742aa8f58f2fa6ca0b268fdeff36" Jan 30 08:29:45 crc kubenswrapper[4870]: I0130 08:29:45.681810 4870 scope.go:117] "RemoveContainer" 
containerID="655c44a300fccee93f2efd7de792fed09bcb0d74830f54032964e7370e7ef3a7" Jan 30 08:29:45 crc kubenswrapper[4870]: E0130 08:29:45.682245 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"655c44a300fccee93f2efd7de792fed09bcb0d74830f54032964e7370e7ef3a7\": container with ID starting with 655c44a300fccee93f2efd7de792fed09bcb0d74830f54032964e7370e7ef3a7 not found: ID does not exist" containerID="655c44a300fccee93f2efd7de792fed09bcb0d74830f54032964e7370e7ef3a7" Jan 30 08:29:45 crc kubenswrapper[4870]: I0130 08:29:45.682272 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"655c44a300fccee93f2efd7de792fed09bcb0d74830f54032964e7370e7ef3a7"} err="failed to get container status \"655c44a300fccee93f2efd7de792fed09bcb0d74830f54032964e7370e7ef3a7\": rpc error: code = NotFound desc = could not find container \"655c44a300fccee93f2efd7de792fed09bcb0d74830f54032964e7370e7ef3a7\": container with ID starting with 655c44a300fccee93f2efd7de792fed09bcb0d74830f54032964e7370e7ef3a7 not found: ID does not exist" Jan 30 08:29:45 crc kubenswrapper[4870]: I0130 08:29:45.682291 4870 scope.go:117] "RemoveContainer" containerID="d34d7c6bccea77a359da1387457bb8ca89ae48119f56d025bbc10a0703b7de08" Jan 30 08:29:45 crc kubenswrapper[4870]: E0130 08:29:45.682626 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d34d7c6bccea77a359da1387457bb8ca89ae48119f56d025bbc10a0703b7de08\": container with ID starting with d34d7c6bccea77a359da1387457bb8ca89ae48119f56d025bbc10a0703b7de08 not found: ID does not exist" containerID="d34d7c6bccea77a359da1387457bb8ca89ae48119f56d025bbc10a0703b7de08" Jan 30 08:29:45 crc kubenswrapper[4870]: I0130 08:29:45.682646 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d34d7c6bccea77a359da1387457bb8ca89ae48119f56d025bbc10a0703b7de08"} err="failed to get container status \"d34d7c6bccea77a359da1387457bb8ca89ae48119f56d025bbc10a0703b7de08\": rpc error: code = NotFound desc = could not find container \"d34d7c6bccea77a359da1387457bb8ca89ae48119f56d025bbc10a0703b7de08\": container with ID starting with d34d7c6bccea77a359da1387457bb8ca89ae48119f56d025bbc10a0703b7de08 not found: ID does not exist" Jan 30 08:29:45 crc kubenswrapper[4870]: I0130 08:29:45.682662 4870 scope.go:117] "RemoveContainer" containerID="532ae0bccf3ad7a9098b323c71e99784c6b9da57d38020ef2075d91ed4925bac" Jan 30 08:29:45 crc kubenswrapper[4870]: E0130 08:29:45.683291 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"532ae0bccf3ad7a9098b323c71e99784c6b9da57d38020ef2075d91ed4925bac\": container with ID starting with 532ae0bccf3ad7a9098b323c71e99784c6b9da57d38020ef2075d91ed4925bac not found: ID does not exist" containerID="532ae0bccf3ad7a9098b323c71e99784c6b9da57d38020ef2075d91ed4925bac" Jan 30 08:29:45 crc kubenswrapper[4870]: I0130 08:29:45.683317 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"532ae0bccf3ad7a9098b323c71e99784c6b9da57d38020ef2075d91ed4925bac"} err="failed to get container status \"532ae0bccf3ad7a9098b323c71e99784c6b9da57d38020ef2075d91ed4925bac\": rpc error: code = NotFound desc = could not find container \"532ae0bccf3ad7a9098b323c71e99784c6b9da57d38020ef2075d91ed4925bac\": container with ID starting with 
532ae0bccf3ad7a9098b323c71e99784c6b9da57d38020ef2075d91ed4925bac not found: ID does not exist" Jan 30 08:29:45 crc kubenswrapper[4870]: I0130 08:29:45.683334 4870 scope.go:117] "RemoveContainer" containerID="a3e37bcca1f4eba205fa794e1aff9acbffd8742aa8f58f2fa6ca0b268fdeff36" Jan 30 08:29:45 crc kubenswrapper[4870]: E0130 08:29:45.683651 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3e37bcca1f4eba205fa794e1aff9acbffd8742aa8f58f2fa6ca0b268fdeff36\": container with ID starting with a3e37bcca1f4eba205fa794e1aff9acbffd8742aa8f58f2fa6ca0b268fdeff36 not found: ID does not exist" containerID="a3e37bcca1f4eba205fa794e1aff9acbffd8742aa8f58f2fa6ca0b268fdeff36" Jan 30 08:29:45 crc kubenswrapper[4870]: I0130 08:29:45.683680 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3e37bcca1f4eba205fa794e1aff9acbffd8742aa8f58f2fa6ca0b268fdeff36"} err="failed to get container status \"a3e37bcca1f4eba205fa794e1aff9acbffd8742aa8f58f2fa6ca0b268fdeff36\": rpc error: code = NotFound desc = could not find container \"a3e37bcca1f4eba205fa794e1aff9acbffd8742aa8f58f2fa6ca0b268fdeff36\": container with ID starting with a3e37bcca1f4eba205fa794e1aff9acbffd8742aa8f58f2fa6ca0b268fdeff36 not found: ID does not exist" Jan 30 08:29:45 crc kubenswrapper[4870]: E0130 08:29:45.765534 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.23:5001/podified-master-centos10/openstack-openstackclient:watcher_latest\\\"\"" pod="openstack/openstackclient" podUID="204a0d39-f7b0-4468-a82f-9fcc49fc1281" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:45.999951 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.007778 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.097465 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-config-data-custom\") pod \"2c1333f8-2564-4b5c-84b9-0045d742c45f\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.097998 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-scripts\") pod \"2c1333f8-2564-4b5c-84b9-0045d742c45f\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.098081 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a4731e-3232-4d27-acf8-9d34ee7570a7-combined-ca-bundle\") pod \"57a4731e-3232-4d27-acf8-9d34ee7570a7\" (UID: \"57a4731e-3232-4d27-acf8-9d34ee7570a7\") " Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.098117 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a4731e-3232-4d27-acf8-9d34ee7570a7-config-data\") pod \"57a4731e-3232-4d27-acf8-9d34ee7570a7\" (UID: \"57a4731e-3232-4d27-acf8-9d34ee7570a7\") " Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.098148 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qdrz\" (UniqueName: \"kubernetes.io/projected/57a4731e-3232-4d27-acf8-9d34ee7570a7-kube-api-access-2qdrz\") pod \"57a4731e-3232-4d27-acf8-9d34ee7570a7\" (UID: \"57a4731e-3232-4d27-acf8-9d34ee7570a7\") " Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.098212 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57a4731e-3232-4d27-acf8-9d34ee7570a7-logs\") pod \"57a4731e-3232-4d27-acf8-9d34ee7570a7\" (UID: \"57a4731e-3232-4d27-acf8-9d34ee7570a7\") " Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.098231 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c1333f8-2564-4b5c-84b9-0045d742c45f-logs\") pod \"2c1333f8-2564-4b5c-84b9-0045d742c45f\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.098303 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57a4731e-3232-4d27-acf8-9d34ee7570a7-config-data-custom\") pod \"57a4731e-3232-4d27-acf8-9d34ee7570a7\" (UID: \"57a4731e-3232-4d27-acf8-9d34ee7570a7\") " Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.098323 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c1333f8-2564-4b5c-84b9-0045d742c45f-etc-machine-id\") pod \"2c1333f8-2564-4b5c-84b9-0045d742c45f\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.098343 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhtln\" (UniqueName: \"kubernetes.io/projected/2c1333f8-2564-4b5c-84b9-0045d742c45f-kube-api-access-xhtln\") pod \"2c1333f8-2564-4b5c-84b9-0045d742c45f\" (UID: 
\"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.098384 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-config-data\") pod \"2c1333f8-2564-4b5c-84b9-0045d742c45f\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.098406 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-combined-ca-bundle\") pod \"2c1333f8-2564-4b5c-84b9-0045d742c45f\" (UID: \"2c1333f8-2564-4b5c-84b9-0045d742c45f\") " Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.100067 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a4731e-3232-4d27-acf8-9d34ee7570a7-logs" (OuterVolumeSpecName: "logs") pod "57a4731e-3232-4d27-acf8-9d34ee7570a7" (UID: "57a4731e-3232-4d27-acf8-9d34ee7570a7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.100696 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c1333f8-2564-4b5c-84b9-0045d742c45f-logs" (OuterVolumeSpecName: "logs") pod "2c1333f8-2564-4b5c-84b9-0045d742c45f" (UID: "2c1333f8-2564-4b5c-84b9-0045d742c45f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.101345 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c1333f8-2564-4b5c-84b9-0045d742c45f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2c1333f8-2564-4b5c-84b9-0045d742c45f" (UID: "2c1333f8-2564-4b5c-84b9-0045d742c45f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.106655 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a4731e-3232-4d27-acf8-9d34ee7570a7-kube-api-access-2qdrz" (OuterVolumeSpecName: "kube-api-access-2qdrz") pod "57a4731e-3232-4d27-acf8-9d34ee7570a7" (UID: "57a4731e-3232-4d27-acf8-9d34ee7570a7"). InnerVolumeSpecName "kube-api-access-2qdrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.108148 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2c1333f8-2564-4b5c-84b9-0045d742c45f" (UID: "2c1333f8-2564-4b5c-84b9-0045d742c45f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.108546 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a4731e-3232-4d27-acf8-9d34ee7570a7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "57a4731e-3232-4d27-acf8-9d34ee7570a7" (UID: "57a4731e-3232-4d27-acf8-9d34ee7570a7"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:46 crc kubenswrapper[4870]: W0130 08:29:46.111830 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7b7679e_30f5_4f8a_96e0_a1581691242d.slice/crio-97c0147556c32e49c5c20b79bd0b3e0fe461928cc98a023b7da24b4aa033c569 WatchSource:0}: Error finding container 97c0147556c32e49c5c20b79bd0b3e0fe461928cc98a023b7da24b4aa033c569: Status 404 returned error can't find the container with id 97c0147556c32e49c5c20b79bd0b3e0fe461928cc98a023b7da24b4aa033c569 Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.113222 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c1333f8-2564-4b5c-84b9-0045d742c45f-kube-api-access-xhtln" (OuterVolumeSpecName: "kube-api-access-xhtln") pod "2c1333f8-2564-4b5c-84b9-0045d742c45f" (UID: "2c1333f8-2564-4b5c-84b9-0045d742c45f"). InnerVolumeSpecName "kube-api-access-xhtln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.119184 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.122261 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.135575 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c1333f8-2564-4b5c-84b9-0045d742c45f" (UID: "2c1333f8-2564-4b5c-84b9-0045d742c45f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.145965 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-scripts" (OuterVolumeSpecName: "scripts") pod "2c1333f8-2564-4b5c-84b9-0045d742c45f" (UID: "2c1333f8-2564-4b5c-84b9-0045d742c45f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.175881 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a4731e-3232-4d27-acf8-9d34ee7570a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57a4731e-3232-4d27-acf8-9d34ee7570a7" (UID: "57a4731e-3232-4d27-acf8-9d34ee7570a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.177968 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-config-data" (OuterVolumeSpecName: "config-data") pod "2c1333f8-2564-4b5c-84b9-0045d742c45f" (UID: "2c1333f8-2564-4b5c-84b9-0045d742c45f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.185266 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a4731e-3232-4d27-acf8-9d34ee7570a7-config-data" (OuterVolumeSpecName: "config-data") pod "57a4731e-3232-4d27-acf8-9d34ee7570a7" (UID: "57a4731e-3232-4d27-acf8-9d34ee7570a7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.200359 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qdrz\" (UniqueName: \"kubernetes.io/projected/57a4731e-3232-4d27-acf8-9d34ee7570a7-kube-api-access-2qdrz\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.200388 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57a4731e-3232-4d27-acf8-9d34ee7570a7-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.200398 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c1333f8-2564-4b5c-84b9-0045d742c45f-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.200407 4870 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57a4731e-3232-4d27-acf8-9d34ee7570a7-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.200416 4870 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c1333f8-2564-4b5c-84b9-0045d742c45f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.200425 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhtln\" (UniqueName: \"kubernetes.io/projected/2c1333f8-2564-4b5c-84b9-0045d742c45f-kube-api-access-xhtln\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.200433 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.200441 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.200449 4870 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.200457 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c1333f8-2564-4b5c-84b9-0045d742c45f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.200466 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a4731e-3232-4d27-acf8-9d34ee7570a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.200478 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a4731e-3232-4d27-acf8-9d34ee7570a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.786993 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7b7679e-30f5-4f8a-96e0-a1581691242d","Type":"ContainerStarted","Data":"d0c9d684e4c518328cada1bb39aa6ed8b3369b2dc238ef8d275c56c25fcc12f4"} Jan 30 
08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.787383 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7b7679e-30f5-4f8a-96e0-a1581691242d","Type":"ContainerStarted","Data":"50cd0eb040bd3e32d35994ee8e1046627f6c90e1405d0d6272ca90428017a5c3"} Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.787402 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7b7679e-30f5-4f8a-96e0-a1581691242d","Type":"ContainerStarted","Data":"97c0147556c32e49c5c20b79bd0b3e0fe461928cc98a023b7da24b4aa033c569"} Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.789416 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2c1333f8-2564-4b5c-84b9-0045d742c45f","Type":"ContainerDied","Data":"5eca3df12794c5b43fdb77c898c9bd28c39f3103bd50eb3571fc088c025d0cf9"} Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.789454 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.789470 4870 scope.go:117] "RemoveContainer" containerID="15d4837d5c345debdddee40f4775635705eb028fe7633d8e6d5c855f92746c7a" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.793736 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-ddfdbf76d-mfqhx" event={"ID":"57a4731e-3232-4d27-acf8-9d34ee7570a7","Type":"ContainerDied","Data":"4fefb5421067779e4ffb7448501feacbbd8e1262345c29ebcf35ade1e4bf9f85"} Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.793791 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-ddfdbf76d-mfqhx" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.820054 4870 scope.go:117] "RemoveContainer" containerID="135b22419515f0c37fa07d5cae62ea43515dc6460498408338173f7df4e2361b" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.839938 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-ddfdbf76d-mfqhx"] Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.852949 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-ddfdbf76d-mfqhx"] Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.855647 4870 scope.go:117] "RemoveContainer" containerID="9a536688d050dc6432091788f5363412218a7bb425a3e180118973e359516afe" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.862386 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.872250 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.883089 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:29:46 crc kubenswrapper[4870]: E0130 08:29:46.883536 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a4731e-3232-4d27-acf8-9d34ee7570a7" containerName="barbican-api-log" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.883556 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a4731e-3232-4d27-acf8-9d34ee7570a7" containerName="barbican-api-log" Jan 30 08:29:46 crc kubenswrapper[4870]: E0130 08:29:46.883572 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c1333f8-2564-4b5c-84b9-0045d742c45f" containerName="cinder-api" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.883579 4870 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="2c1333f8-2564-4b5c-84b9-0045d742c45f" containerName="cinder-api" Jan 30 08:29:46 crc kubenswrapper[4870]: E0130 08:29:46.883601 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c1333f8-2564-4b5c-84b9-0045d742c45f" containerName="cinder-api-log" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.883608 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c1333f8-2564-4b5c-84b9-0045d742c45f" containerName="cinder-api-log" Jan 30 08:29:46 crc kubenswrapper[4870]: E0130 08:29:46.883627 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a4731e-3232-4d27-acf8-9d34ee7570a7" containerName="barbican-api" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.883634 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a4731e-3232-4d27-acf8-9d34ee7570a7" containerName="barbican-api" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.883826 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="57a4731e-3232-4d27-acf8-9d34ee7570a7" containerName="barbican-api-log" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.883847 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c1333f8-2564-4b5c-84b9-0045d742c45f" containerName="cinder-api-log" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.883858 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="57a4731e-3232-4d27-acf8-9d34ee7570a7" containerName="barbican-api" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.883868 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c1333f8-2564-4b5c-84b9-0045d742c45f" containerName="cinder-api" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.885146 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.889202 4870 scope.go:117] "RemoveContainer" containerID="4bfee3b8db156e8f632e1b810fed5fff1f01f89361fe371e88524396b0b3e740" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.889419 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.889453 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.889546 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 08:29:46 crc kubenswrapper[4870]: I0130 08:29:46.896575 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.024775 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.024832 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-logs\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.024957 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-scripts\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.024973 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.025006 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.025031 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.025095 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k6hx\" (UniqueName: \"kubernetes.io/projected/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-kube-api-access-2k6hx\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 
08:29:47.025154 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-config-data\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.025177 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-config-data-custom\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.127252 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-config-data\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.127301 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-config-data-custom\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.127357 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.127376 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-logs\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.127424 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-scripts\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.127437 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.127471 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.127497 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc 
kubenswrapper[4870]: I0130 08:29:47.127550 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k6hx\" (UniqueName: \"kubernetes.io/projected/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-kube-api-access-2k6hx\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.128214 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.128516 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-logs\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.132751 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-config-data-custom\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.133677 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.133724 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-config-data\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.135441 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.138488 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-scripts\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.142388 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.147263 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k6hx\" (UniqueName: \"kubernetes.io/projected/dcb916a9-c812-4e35-91d2-a4cc4ef78fc3-kube-api-access-2k6hx\") pod \"cinder-api-0\" (UID: \"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3\") " pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.235753 4870 
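The VerifyControllerAttachedVolume / MountVolume.SetUp entries above enumerate every volume in the replacement cinder-api-0 pod's spec. A minimal client-go sketch for cross-checking those names against the live pod spec; this is an editor-added illustration, not part of the journal, and it assumes a reachable cluster and a default kubeconfig at ~/.kube/config:

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Build a client from the default kubeconfig (assumption: out-of-cluster access).
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Fetch the pod the log entries refer to and print its declared volumes;
	// each name should match a "MountVolume started for volume ..." entry.
	pod, err := cs.CoreV1().Pods("openstack").Get(context.TODO(), "cinder-api-0", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, v := range pod.Spec.Volumes {
		fmt.Println(v.Name)
	}
}
```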
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.700386 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.808373 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6d69bf9957-gj6dt" Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.811318 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3","Type":"ContainerStarted","Data":"8c3a38440e961862f41286f49f7349a9380fb90dba8ff37905275bcfa07ea8ce"} Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.817797 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7b7679e-30f5-4f8a-96e0-a1581691242d","Type":"ContainerStarted","Data":"fb611dcaa5785bae71aa5640bfd419f130d7c82c2c11c264d4870922c0ea5ab8"} Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.872172 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f966fd88d-sdpcn"] Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.872441 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-f966fd88d-sdpcn" podUID="cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88" containerName="neutron-api" containerID="cri-o://b1d3aae9bf64c7d5adb6c7a0c0cef4cbd05ceee79bf2bdc26b1676e0ef8ac7ff" gracePeriod=30 Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.872826 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-f966fd88d-sdpcn" podUID="cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88" containerName="neutron-httpd" containerID="cri-o://8e106b0c6b2ed513250f13c043895b69dbe1cd77d36b5ecd4e47e2f2226b112e" gracePeriod=30 Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.987947 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.988268 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="01e7af93-8480-4484-9558-5455eb00fa2b" containerName="glance-log" containerID="cri-o://b7338a02d0d2eab98544b50920230e4bcceedd83d4acefe6a39ec62e93edaabe" gracePeriod=30 Jan 30 08:29:47 crc kubenswrapper[4870]: I0130 08:29:47.988842 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="01e7af93-8480-4484-9558-5455eb00fa2b" containerName="glance-httpd" containerID="cri-o://5de813254396c24f18fe9fd8238b591bddeb0213951b91ac54dfc76965423fe5" gracePeriod=30 Jan 30 08:29:48 crc kubenswrapper[4870]: I0130 08:29:48.105068 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c1333f8-2564-4b5c-84b9-0045d742c45f" path="/var/lib/kubelet/pods/2c1333f8-2564-4b5c-84b9-0045d742c45f/volumes" Jan 30 08:29:48 crc kubenswrapper[4870]: I0130 08:29:48.107110 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a4731e-3232-4d27-acf8-9d34ee7570a7" path="/var/lib/kubelet/pods/57a4731e-3232-4d27-acf8-9d34ee7570a7/volumes" Jan 30 08:29:48 crc kubenswrapper[4870]: I0130 08:29:48.843548 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3","Type":"ContainerStarted","Data":"82245963163f3797bd8db1de7522dea3a555555b0f7844f58138397bc618217c"} Jan 30 08:29:48 crc kubenswrapper[4870]: I0130 08:29:48.859734 4870 generic.go:334] "Generic (PLEG): container finished" podID="01e7af93-8480-4484-9558-5455eb00fa2b" containerID="b7338a02d0d2eab98544b50920230e4bcceedd83d4acefe6a39ec62e93edaabe" exitCode=143 Jan 30 08:29:48 crc kubenswrapper[4870]: I0130 08:29:48.860057 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"01e7af93-8480-4484-9558-5455eb00fa2b","Type":"ContainerDied","Data":"b7338a02d0d2eab98544b50920230e4bcceedd83d4acefe6a39ec62e93edaabe"} Jan 30 08:29:48 crc kubenswrapper[4870]: I0130 08:29:48.865752 4870 generic.go:334] "Generic (PLEG): container finished" podID="cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88" containerID="8e106b0c6b2ed513250f13c043895b69dbe1cd77d36b5ecd4e47e2f2226b112e" exitCode=0 Jan 30 08:29:48 crc kubenswrapper[4870]: I0130 08:29:48.865797 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f966fd88d-sdpcn" event={"ID":"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88","Type":"ContainerDied","Data":"8e106b0c6b2ed513250f13c043895b69dbe1cd77d36b5ecd4e47e2f2226b112e"} Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.366254 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.482559 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01e7af93-8480-4484-9558-5455eb00fa2b-logs\") pod \"01e7af93-8480-4484-9558-5455eb00fa2b\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.482614 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-combined-ca-bundle\") pod \"01e7af93-8480-4484-9558-5455eb00fa2b\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.482640 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"01e7af93-8480-4484-9558-5455eb00fa2b\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.482697 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/01e7af93-8480-4484-9558-5455eb00fa2b-httpd-run\") pod \"01e7af93-8480-4484-9558-5455eb00fa2b\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.482771 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp296\" (UniqueName: \"kubernetes.io/projected/01e7af93-8480-4484-9558-5455eb00fa2b-kube-api-access-bp296\") pod \"01e7af93-8480-4484-9558-5455eb00fa2b\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.482837 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-public-tls-certs\") pod \"01e7af93-8480-4484-9558-5455eb00fa2b\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " 
Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.482866 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-config-data\") pod \"01e7af93-8480-4484-9558-5455eb00fa2b\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.482942 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-scripts\") pod \"01e7af93-8480-4484-9558-5455eb00fa2b\" (UID: \"01e7af93-8480-4484-9558-5455eb00fa2b\") " Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.483226 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01e7af93-8480-4484-9558-5455eb00fa2b-logs" (OuterVolumeSpecName: "logs") pod "01e7af93-8480-4484-9558-5455eb00fa2b" (UID: "01e7af93-8480-4484-9558-5455eb00fa2b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.483548 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01e7af93-8480-4484-9558-5455eb00fa2b-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.483593 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01e7af93-8480-4484-9558-5455eb00fa2b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "01e7af93-8480-4484-9558-5455eb00fa2b" (UID: "01e7af93-8480-4484-9558-5455eb00fa2b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.488137 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "01e7af93-8480-4484-9558-5455eb00fa2b" (UID: "01e7af93-8480-4484-9558-5455eb00fa2b"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.488685 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-scripts" (OuterVolumeSpecName: "scripts") pod "01e7af93-8480-4484-9558-5455eb00fa2b" (UID: "01e7af93-8480-4484-9558-5455eb00fa2b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.503057 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e7af93-8480-4484-9558-5455eb00fa2b-kube-api-access-bp296" (OuterVolumeSpecName: "kube-api-access-bp296") pod "01e7af93-8480-4484-9558-5455eb00fa2b" (UID: "01e7af93-8480-4484-9558-5455eb00fa2b"). InnerVolumeSpecName "kube-api-access-bp296". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.536390 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01e7af93-8480-4484-9558-5455eb00fa2b" (UID: "01e7af93-8480-4484-9558-5455eb00fa2b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.538495 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-config-data" (OuterVolumeSpecName: "config-data") pod "01e7af93-8480-4484-9558-5455eb00fa2b" (UID: "01e7af93-8480-4484-9558-5455eb00fa2b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.560077 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "01e7af93-8480-4484-9558-5455eb00fa2b" (UID: "01e7af93-8480-4484-9558-5455eb00fa2b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.585044 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp296\" (UniqueName: \"kubernetes.io/projected/01e7af93-8480-4484-9558-5455eb00fa2b-kube-api-access-bp296\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.585076 4870 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.585086 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.585094 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.585102 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01e7af93-8480-4484-9558-5455eb00fa2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.585134 4870 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.585143 4870 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/01e7af93-8480-4484-9558-5455eb00fa2b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.603414 4870 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.686731 4870 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.784942 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.785187 4870 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-internal-api-0" podUID="65bb64e7-45f2-4b8d-94f0-34c21ac75042" containerName="glance-log" containerID="cri-o://2b63187d02cf3f1e134d09ad354fab4f72d729ac96dbea12fd138ec6e9c9f8b1" gracePeriod=30 Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.785291 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="65bb64e7-45f2-4b8d-94f0-34c21ac75042" containerName="glance-httpd" containerID="cri-o://7626fef9e744bdf23c5143f0c48395b24bab8239f9cfdb06207150e977be0d18" gracePeriod=30 Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.883692 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7b7679e-30f5-4f8a-96e0-a1581691242d","Type":"ContainerStarted","Data":"ea2dc3bf6b01b6486bc1be0495050fed1248c3c10f9f393da82381c4f2c83249"} Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.883793 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7b7679e-30f5-4f8a-96e0-a1581691242d" containerName="ceilometer-central-agent" containerID="cri-o://50cd0eb040bd3e32d35994ee8e1046627f6c90e1405d0d6272ca90428017a5c3" gracePeriod=30 Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.883830 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.883831 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7b7679e-30f5-4f8a-96e0-a1581691242d" containerName="sg-core" containerID="cri-o://fb611dcaa5785bae71aa5640bfd419f130d7c82c2c11c264d4870922c0ea5ab8" gracePeriod=30 Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.883865 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7b7679e-30f5-4f8a-96e0-a1581691242d" containerName="ceilometer-notification-agent" containerID="cri-o://d0c9d684e4c518328cada1bb39aa6ed8b3369b2dc238ef8d275c56c25fcc12f4" gracePeriod=30 Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.883891 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7b7679e-30f5-4f8a-96e0-a1581691242d" containerName="proxy-httpd" containerID="cri-o://ea2dc3bf6b01b6486bc1be0495050fed1248c3c10f9f393da82381c4f2c83249" gracePeriod=30 Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.897251 4870 generic.go:334] "Generic (PLEG): container finished" podID="01e7af93-8480-4484-9558-5455eb00fa2b" containerID="5de813254396c24f18fe9fd8238b591bddeb0213951b91ac54dfc76965423fe5" exitCode=0 Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.897312 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"01e7af93-8480-4484-9558-5455eb00fa2b","Type":"ContainerDied","Data":"5de813254396c24f18fe9fd8238b591bddeb0213951b91ac54dfc76965423fe5"} Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.897337 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"01e7af93-8480-4484-9558-5455eb00fa2b","Type":"ContainerDied","Data":"eda82349da897444026066edb7a8f71a1933756e2aef786c074692bf323e90ef"} Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.897354 4870 scope.go:117] "RemoveContainer" containerID="5de813254396c24f18fe9fd8238b591bddeb0213951b91ac54dfc76965423fe5" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 
08:29:49.897448 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.926937 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"dcb916a9-c812-4e35-91d2-a4cc4ef78fc3","Type":"ContainerStarted","Data":"51db07cabbb6030fef39307c721258594aeda04685efcd0196b6f2f9126031a1"} Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.927142 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.948454 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=12.008972996 podStartE2EDuration="14.948437651s" podCreationTimestamp="2026-01-30 08:29:35 +0000 UTC" firstStartedPulling="2026-01-30 08:29:46.122024559 +0000 UTC m=+1224.817571668" lastFinishedPulling="2026-01-30 08:29:49.061489214 +0000 UTC m=+1227.757036323" observedRunningTime="2026-01-30 08:29:49.911737537 +0000 UTC m=+1228.607284646" watchObservedRunningTime="2026-01-30 08:29:49.948437651 +0000 UTC m=+1228.643984760" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.949450 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.9494441829999998 podStartE2EDuration="3.949444183s" podCreationTimestamp="2026-01-30 08:29:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:29:49.946039586 +0000 UTC m=+1228.641586695" watchObservedRunningTime="2026-01-30 08:29:49.949444183 +0000 UTC m=+1228.644991292" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.972445 4870 scope.go:117] "RemoveContainer" containerID="b7338a02d0d2eab98544b50920230e4bcceedd83d4acefe6a39ec62e93edaabe" Jan 30 08:29:49 crc kubenswrapper[4870]: I0130 08:29:49.979983 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.010965 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.025090 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:29:50 crc kubenswrapper[4870]: E0130 08:29:50.026162 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e7af93-8480-4484-9558-5455eb00fa2b" containerName="glance-httpd" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.026184 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e7af93-8480-4484-9558-5455eb00fa2b" containerName="glance-httpd" Jan 30 08:29:50 crc kubenswrapper[4870]: E0130 08:29:50.026216 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e7af93-8480-4484-9558-5455eb00fa2b" containerName="glance-log" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.026224 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e7af93-8480-4484-9558-5455eb00fa2b" containerName="glance-log" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.027609 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="01e7af93-8480-4484-9558-5455eb00fa2b" containerName="glance-log" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.037964 4870 memory_manager.go:354] "RemoveStaleState removing state" 
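The pod_startup_latency_tracker entries above encode simple arithmetic that can be verified from the logged timestamps: the end-to-end duration is the watch-observed running time minus the pod creation timestamp, and the SLO duration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). A small editor-added Go check using the ceilometer-0 values from the entry (illustrative; the field names and exact formula are inferred from the numbers, which do reproduce exactly):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching Go's default time.Time formatting used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-30 08:29:35 +0000 UTC")             // podCreationTimestamp
	running := parse("2026-01-30 08:29:49.948437651 +0000 UTC")   // watchObservedRunningTime
	pullStart := parse("2026-01-30 08:29:46.122024559 +0000 UTC") // firstStartedPulling
	pullEnd := parse("2026-01-30 08:29:49.061489214 +0000 UTC")   // lastFinishedPulling

	e2e := running.Sub(created)         // 14.948437651s = podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // 12.008972996s = podStartSLOduration
	fmt.Println(e2e, slo)
}
```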
podUID="01e7af93-8480-4484-9558-5455eb00fa2b" containerName="glance-httpd" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.040782 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.044528 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.057528 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.057758 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.058236 4870 scope.go:117] "RemoveContainer" containerID="5de813254396c24f18fe9fd8238b591bddeb0213951b91ac54dfc76965423fe5" Jan 30 08:29:50 crc kubenswrapper[4870]: E0130 08:29:50.068340 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5de813254396c24f18fe9fd8238b591bddeb0213951b91ac54dfc76965423fe5\": container with ID starting with 5de813254396c24f18fe9fd8238b591bddeb0213951b91ac54dfc76965423fe5 not found: ID does not exist" containerID="5de813254396c24f18fe9fd8238b591bddeb0213951b91ac54dfc76965423fe5" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.068438 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5de813254396c24f18fe9fd8238b591bddeb0213951b91ac54dfc76965423fe5"} err="failed to get container status \"5de813254396c24f18fe9fd8238b591bddeb0213951b91ac54dfc76965423fe5\": rpc error: code = NotFound desc = could not find container \"5de813254396c24f18fe9fd8238b591bddeb0213951b91ac54dfc76965423fe5\": container with ID starting with 5de813254396c24f18fe9fd8238b591bddeb0213951b91ac54dfc76965423fe5 not found: ID does not exist" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.068495 4870 scope.go:117] "RemoveContainer" containerID="b7338a02d0d2eab98544b50920230e4bcceedd83d4acefe6a39ec62e93edaabe" Jan 30 08:29:50 crc kubenswrapper[4870]: E0130 08:29:50.075551 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7338a02d0d2eab98544b50920230e4bcceedd83d4acefe6a39ec62e93edaabe\": container with ID starting with b7338a02d0d2eab98544b50920230e4bcceedd83d4acefe6a39ec62e93edaabe not found: ID does not exist" containerID="b7338a02d0d2eab98544b50920230e4bcceedd83d4acefe6a39ec62e93edaabe" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.075594 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7338a02d0d2eab98544b50920230e4bcceedd83d4acefe6a39ec62e93edaabe"} err="failed to get container status \"b7338a02d0d2eab98544b50920230e4bcceedd83d4acefe6a39ec62e93edaabe\": rpc error: code = NotFound desc = could not find container \"b7338a02d0d2eab98544b50920230e4bcceedd83d4acefe6a39ec62e93edaabe\": container with ID starting with b7338a02d0d2eab98544b50920230e4bcceedd83d4acefe6a39ec62e93edaabe not found: ID does not exist" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.119466 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01e7af93-8480-4484-9558-5455eb00fa2b" path="/var/lib/kubelet/pods/01e7af93-8480-4484-9558-5455eb00fa2b/volumes" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 
08:29:50.127084 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/743b8276-eb2e-49fa-b493-fb83f20837ed-config-data\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.127161 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/743b8276-eb2e-49fa-b493-fb83f20837ed-scripts\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.127201 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmh9s\" (UniqueName: \"kubernetes.io/projected/743b8276-eb2e-49fa-b493-fb83f20837ed-kube-api-access-fmh9s\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.127267 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/743b8276-eb2e-49fa-b493-fb83f20837ed-logs\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.127296 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/743b8276-eb2e-49fa-b493-fb83f20837ed-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.127379 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/743b8276-eb2e-49fa-b493-fb83f20837ed-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.127521 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/743b8276-eb2e-49fa-b493-fb83f20837ed-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.127551 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.228811 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/743b8276-eb2e-49fa-b493-fb83f20837ed-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " 
pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.228931 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/743b8276-eb2e-49fa-b493-fb83f20837ed-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.228954 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.229015 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/743b8276-eb2e-49fa-b493-fb83f20837ed-config-data\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.229046 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/743b8276-eb2e-49fa-b493-fb83f20837ed-scripts\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.229082 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmh9s\" (UniqueName: \"kubernetes.io/projected/743b8276-eb2e-49fa-b493-fb83f20837ed-kube-api-access-fmh9s\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.229125 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/743b8276-eb2e-49fa-b493-fb83f20837ed-logs\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.229150 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/743b8276-eb2e-49fa-b493-fb83f20837ed-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.230228 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/743b8276-eb2e-49fa-b493-fb83f20837ed-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.230418 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 
08:29:50.230674 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/743b8276-eb2e-49fa-b493-fb83f20837ed-logs\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.234266 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/743b8276-eb2e-49fa-b493-fb83f20837ed-scripts\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.236466 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/743b8276-eb2e-49fa-b493-fb83f20837ed-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.236696 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/743b8276-eb2e-49fa-b493-fb83f20837ed-config-data\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.237444 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/743b8276-eb2e-49fa-b493-fb83f20837ed-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.247570 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmh9s\" (UniqueName: \"kubernetes.io/projected/743b8276-eb2e-49fa-b493-fb83f20837ed-kube-api-access-fmh9s\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.264356 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"743b8276-eb2e-49fa-b493-fb83f20837ed\") " pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.438776 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.938321 4870 generic.go:334] "Generic (PLEG): container finished" podID="cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88" containerID="b1d3aae9bf64c7d5adb6c7a0c0cef4cbd05ceee79bf2bdc26b1676e0ef8ac7ff" exitCode=0 Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.938492 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f966fd88d-sdpcn" event={"ID":"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88","Type":"ContainerDied","Data":"b1d3aae9bf64c7d5adb6c7a0c0cef4cbd05ceee79bf2bdc26b1676e0ef8ac7ff"} Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.938730 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f966fd88d-sdpcn" event={"ID":"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88","Type":"ContainerDied","Data":"cfedb81ba7d9e195fe41ff8a768c117183039fb6da240c26ec467012add1460c"} Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.938742 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfedb81ba7d9e195fe41ff8a768c117183039fb6da240c26ec467012add1460c" Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.940637 4870 generic.go:334] "Generic (PLEG): container finished" podID="65bb64e7-45f2-4b8d-94f0-34c21ac75042" containerID="2b63187d02cf3f1e134d09ad354fab4f72d729ac96dbea12fd138ec6e9c9f8b1" exitCode=143 Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.940713 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"65bb64e7-45f2-4b8d-94f0-34c21ac75042","Type":"ContainerDied","Data":"2b63187d02cf3f1e134d09ad354fab4f72d729ac96dbea12fd138ec6e9c9f8b1"} Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.943417 4870 generic.go:334] "Generic (PLEG): container finished" podID="e7b7679e-30f5-4f8a-96e0-a1581691242d" containerID="ea2dc3bf6b01b6486bc1be0495050fed1248c3c10f9f393da82381c4f2c83249" exitCode=0 Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.943438 4870 generic.go:334] "Generic (PLEG): container finished" podID="e7b7679e-30f5-4f8a-96e0-a1581691242d" containerID="fb611dcaa5785bae71aa5640bfd419f130d7c82c2c11c264d4870922c0ea5ab8" exitCode=2 Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.943446 4870 generic.go:334] "Generic (PLEG): container finished" podID="e7b7679e-30f5-4f8a-96e0-a1581691242d" containerID="d0c9d684e4c518328cada1bb39aa6ed8b3369b2dc238ef8d275c56c25fcc12f4" exitCode=0 Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.943440 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7b7679e-30f5-4f8a-96e0-a1581691242d","Type":"ContainerDied","Data":"ea2dc3bf6b01b6486bc1be0495050fed1248c3c10f9f393da82381c4f2c83249"} Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.943478 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7b7679e-30f5-4f8a-96e0-a1581691242d","Type":"ContainerDied","Data":"fb611dcaa5785bae71aa5640bfd419f130d7c82c2c11c264d4870922c0ea5ab8"} Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.943495 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7b7679e-30f5-4f8a-96e0-a1581691242d","Type":"ContainerDied","Data":"d0c9d684e4c518328cada1bb39aa6ed8b3369b2dc238ef8d275c56c25fcc12f4"} Jan 30 08:29:50 crc kubenswrapper[4870]: I0130 08:29:50.962252 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f966fd88d-sdpcn" Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.046586 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-httpd-config\") pod \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\" (UID: \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\") " Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.046651 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-config\") pod \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\" (UID: \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\") " Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.046794 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-ovndb-tls-certs\") pod \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\" (UID: \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\") " Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.046834 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8xs7\" (UniqueName: \"kubernetes.io/projected/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-kube-api-access-z8xs7\") pod \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\" (UID: \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\") " Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.046895 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-combined-ca-bundle\") pod \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\" (UID: \"cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88\") " Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.058006 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88" (UID: "cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.074459 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-kube-api-access-z8xs7" (OuterVolumeSpecName: "kube-api-access-z8xs7") pod "cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88" (UID: "cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88"). InnerVolumeSpecName "kube-api-access-z8xs7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.074970 4870 scope.go:117] "RemoveContainer" containerID="bbcef241c5e081af384f050ba2bd88c0480b531f6aa26fa9ee5db11c38f751ba" Jan 30 08:29:51 crc kubenswrapper[4870]: E0130 08:29:51.075305 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(8628af25-d5e4-46a0-adec-4c25ca39676b)\"" pod="openstack/watcher-decision-engine-0" podUID="8628af25-d5e4-46a0-adec-4c25ca39676b" Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.102426 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-config" (OuterVolumeSpecName: "config") pod "cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88" (UID: "cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.126549 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88" (UID: "cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.131313 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88" (UID: "cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.149136 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8xs7\" (UniqueName: \"kubernetes.io/projected/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-kube-api-access-z8xs7\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.149166 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.149175 4870 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.149184 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.149192 4870 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.169213 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.957924 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.959505 4870 generic.go:334] "Generic (PLEG): container finished" podID="65bb64e7-45f2-4b8d-94f0-34c21ac75042" containerID="7626fef9e744bdf23c5143f0c48395b24bab8239f9cfdb06207150e977be0d18" exitCode=0 Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.959567 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"65bb64e7-45f2-4b8d-94f0-34c21ac75042","Type":"ContainerDied","Data":"7626fef9e744bdf23c5143f0c48395b24bab8239f9cfdb06207150e977be0d18"} Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.959594 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"65bb64e7-45f2-4b8d-94f0-34c21ac75042","Type":"ContainerDied","Data":"efd4740849ad794d4e57051943e44782204bcc76846a706c3d143892f1a69cb3"} Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.959612 4870 scope.go:117] "RemoveContainer" containerID="7626fef9e744bdf23c5143f0c48395b24bab8239f9cfdb06207150e977be0d18" Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.961637 4870 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.962950 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"743b8276-eb2e-49fa-b493-fb83f20837ed","Type":"ContainerStarted","Data":"7045cb5ae41f1b9872cd811da064658bad7a81f498adc2ec631312931ad8e707"}
Jan 30 08:29:51 crc kubenswrapper[4870]: I0130 08:29:51.962993 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"743b8276-eb2e-49fa-b493-fb83f20837ed","Type":"ContainerStarted","Data":"8841e1e2b84975d36b7273d680e634b70dead257ee514476a271e3ed38497b8b"}
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.016686 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f966fd88d-sdpcn"]
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.017990 4870 scope.go:117] "RemoveContainer" containerID="2b63187d02cf3f1e134d09ad354fab4f72d729ac96dbea12fd138ec6e9c9f8b1"
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.022731 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f966fd88d-sdpcn"]
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.069440 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-scripts\") pod \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") "
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.069571 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-config-data\") pod \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") "
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.069611 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/65bb64e7-45f2-4b8d-94f0-34c21ac75042-httpd-run\") pod \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") "
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.069680 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") "
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.069748 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65bb64e7-45f2-4b8d-94f0-34c21ac75042-logs\") pod \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") "
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.069805 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-combined-ca-bundle\") pod \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") "
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.069841 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-internal-tls-certs\") pod \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") "
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.069890 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46d96\" (UniqueName: \"kubernetes.io/projected/65bb64e7-45f2-4b8d-94f0-34c21ac75042-kube-api-access-46d96\") pod \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\" (UID: \"65bb64e7-45f2-4b8d-94f0-34c21ac75042\") "
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.071420 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65bb64e7-45f2-4b8d-94f0-34c21ac75042-logs" (OuterVolumeSpecName: "logs") pod "65bb64e7-45f2-4b8d-94f0-34c21ac75042" (UID: "65bb64e7-45f2-4b8d-94f0-34c21ac75042"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.071694 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65bb64e7-45f2-4b8d-94f0-34c21ac75042-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "65bb64e7-45f2-4b8d-94f0-34c21ac75042" (UID: "65bb64e7-45f2-4b8d-94f0-34c21ac75042"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.075484 4870 scope.go:117] "RemoveContainer" containerID="7626fef9e744bdf23c5143f0c48395b24bab8239f9cfdb06207150e977be0d18"
Jan 30 08:29:52 crc kubenswrapper[4870]: E0130 08:29:52.079012 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7626fef9e744bdf23c5143f0c48395b24bab8239f9cfdb06207150e977be0d18\": container with ID starting with 7626fef9e744bdf23c5143f0c48395b24bab8239f9cfdb06207150e977be0d18 not found: ID does not exist" containerID="7626fef9e744bdf23c5143f0c48395b24bab8239f9cfdb06207150e977be0d18"
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.079137 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7626fef9e744bdf23c5143f0c48395b24bab8239f9cfdb06207150e977be0d18"} err="failed to get container status \"7626fef9e744bdf23c5143f0c48395b24bab8239f9cfdb06207150e977be0d18\": rpc error: code = NotFound desc = could not find container \"7626fef9e744bdf23c5143f0c48395b24bab8239f9cfdb06207150e977be0d18\": container with ID starting with 7626fef9e744bdf23c5143f0c48395b24bab8239f9cfdb06207150e977be0d18 not found: ID does not exist"
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.079220 4870 scope.go:117] "RemoveContainer" containerID="2b63187d02cf3f1e134d09ad354fab4f72d729ac96dbea12fd138ec6e9c9f8b1"
Jan 30 08:29:52 crc kubenswrapper[4870]: E0130 08:29:52.079610 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b63187d02cf3f1e134d09ad354fab4f72d729ac96dbea12fd138ec6e9c9f8b1\": container with ID starting with 2b63187d02cf3f1e134d09ad354fab4f72d729ac96dbea12fd138ec6e9c9f8b1 not found: ID does not exist" containerID="2b63187d02cf3f1e134d09ad354fab4f72d729ac96dbea12fd138ec6e9c9f8b1"
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.079694 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b63187d02cf3f1e134d09ad354fab4f72d729ac96dbea12fd138ec6e9c9f8b1"} err="failed to get container status \"2b63187d02cf3f1e134d09ad354fab4f72d729ac96dbea12fd138ec6e9c9f8b1\": rpc error: code = NotFound desc = could not find container \"2b63187d02cf3f1e134d09ad354fab4f72d729ac96dbea12fd138ec6e9c9f8b1\": container with ID starting with 2b63187d02cf3f1e134d09ad354fab4f72d729ac96dbea12fd138ec6e9c9f8b1 not found: ID does not exist"
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.082730 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "65bb64e7-45f2-4b8d-94f0-34c21ac75042" (UID: "65bb64e7-45f2-4b8d-94f0-34c21ac75042"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.084045 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-scripts" (OuterVolumeSpecName: "scripts") pod "65bb64e7-45f2-4b8d-94f0-34c21ac75042" (UID: "65bb64e7-45f2-4b8d-94f0-34c21ac75042"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.101435 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65bb64e7-45f2-4b8d-94f0-34c21ac75042-kube-api-access-46d96" (OuterVolumeSpecName: "kube-api-access-46d96") pod "65bb64e7-45f2-4b8d-94f0-34c21ac75042" (UID: "65bb64e7-45f2-4b8d-94f0-34c21ac75042"). InnerVolumeSpecName "kube-api-access-46d96". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.108300 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88" path="/var/lib/kubelet/pods/cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88/volumes"
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.111080 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65bb64e7-45f2-4b8d-94f0-34c21ac75042" (UID: "65bb64e7-45f2-4b8d-94f0-34c21ac75042"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.129674 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-config-data" (OuterVolumeSpecName: "config-data") pod "65bb64e7-45f2-4b8d-94f0-34c21ac75042" (UID: "65bb64e7-45f2-4b8d-94f0-34c21ac75042"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.157753 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "65bb64e7-45f2-4b8d-94f0-34c21ac75042" (UID: "65bb64e7-45f2-4b8d-94f0-34c21ac75042"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.172991 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.173030 4870 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/65bb64e7-45f2-4b8d-94f0-34c21ac75042-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.173049 4870 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.173059 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65bb64e7-45f2-4b8d-94f0-34c21ac75042-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.173067 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.173079 4870 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.173088 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46d96\" (UniqueName: \"kubernetes.io/projected/65bb64e7-45f2-4b8d-94f0-34c21ac75042-kube-api-access-46d96\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.173096 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65bb64e7-45f2-4b8d-94f0-34c21ac75042-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.192155 4870 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.275079 4870 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.972195 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"743b8276-eb2e-49fa-b493-fb83f20837ed","Type":"ContainerStarted","Data":"68bb452d12c7f1b5a44e23d8dc4ff7ac7ee1407e95b1974d05face87ce2a046d"} Jan 30 08:29:52 crc kubenswrapper[4870]: I0130 08:29:52.973841 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.008353 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.008333909 podStartE2EDuration="4.008333909s" podCreationTimestamp="2026-01-30 08:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:29:52.995245329 +0000 UTC m=+1231.690792438" watchObservedRunningTime="2026-01-30 08:29:53.008333909 +0000 UTC m=+1231.703881018" Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.018909 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.027181 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.035714 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:29:53 crc kubenswrapper[4870]: E0130 08:29:53.040774 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65bb64e7-45f2-4b8d-94f0-34c21ac75042" containerName="glance-httpd" Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.040997 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="65bb64e7-45f2-4b8d-94f0-34c21ac75042" containerName="glance-httpd" Jan 30 08:29:53 crc kubenswrapper[4870]: E0130 08:29:53.041151 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88" containerName="neutron-httpd" Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.041232 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88" containerName="neutron-httpd" Jan 30 08:29:53 crc kubenswrapper[4870]: E0130 08:29:53.041317 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65bb64e7-45f2-4b8d-94f0-34c21ac75042" containerName="glance-log" Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.041384 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="65bb64e7-45f2-4b8d-94f0-34c21ac75042" containerName="glance-log" Jan 30 08:29:53 crc kubenswrapper[4870]: E0130 08:29:53.041479 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88" containerName="neutron-api" Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.041553 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88" containerName="neutron-api" Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.041988 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88" containerName="neutron-httpd" Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.042063 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd6b80f-bafd-4b52-acb3-da3ff3ac5e88" containerName="neutron-api" Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.042136 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="65bb64e7-45f2-4b8d-94f0-34c21ac75042" containerName="glance-httpd" Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.042214 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="65bb64e7-45f2-4b8d-94f0-34c21ac75042" containerName="glance-log" Jan 30 08:29:53 crc kubenswrapper[4870]: E0130 08:29:53.041984 
4870 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65bb64e7_45f2_4b8d_94f0_34c21ac75042.slice/crio-efd4740849ad794d4e57051943e44782204bcc76846a706c3d143892f1a69cb3\": RecentStats: unable to find data in memory cache]" Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.043367 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.047418 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.048303 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.060795 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.092241 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2efb8d24-a358-43df-af27-d74c4cf88e1f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.092315 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.092406 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efb8d24-a358-43df-af27-d74c4cf88e1f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.092456 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2efb8d24-a358-43df-af27-d74c4cf88e1f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.092677 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2efb8d24-a358-43df-af27-d74c4cf88e1f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.092746 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2efb8d24-a358-43df-af27-d74c4cf88e1f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.092858 4870 
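Note the DELETE/REMOVE/ADD triple above: glance-default-internal-api-0 keeps its name but comes back under a new UID (65bb64e7-… replaced by 2efb8d24-…), which is a pod replacement rather than a container restart; the RemoveStaleState entries are the CPU and memory managers discarding per-container state keyed on the old UIDs. A sketch that makes the UID swap visible from the API side, assuming ~/.kube/config grants read access to the "openstack" namespace:

// watchuid.go — watches one pod name and prints the UID on each event;
// a Deleted followed by an Added with a different UID is a replacement.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	w, err := cs.CoreV1().Pods("openstack").Watch(context.Background(),
		metav1.ListOptions{FieldSelector: "metadata.name=glance-default-internal-api-0"})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		if pod, ok := ev.Object.(*corev1.Pod); ok {
			// Same name, new UID: volumes, cpuset and memory state are
			// torn down and rebuilt, exactly as the log entries show.
			fmt.Printf("%s %s uid=%s\n", ev.Type, pod.Name, pod.UID)
		}
	}
}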
Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.092932 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7zc6\" (UniqueName: \"kubernetes.io/projected/2efb8d24-a358-43df-af27-d74c4cf88e1f-kube-api-access-l7zc6\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.194244 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2efb8d24-a358-43df-af27-d74c4cf88e1f-logs\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.194295 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7zc6\" (UniqueName: \"kubernetes.io/projected/2efb8d24-a358-43df-af27-d74c4cf88e1f-kube-api-access-l7zc6\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.194335 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2efb8d24-a358-43df-af27-d74c4cf88e1f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.194353 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.194379 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efb8d24-a358-43df-af27-d74c4cf88e1f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.194397 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2efb8d24-a358-43df-af27-d74c4cf88e1f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.194461 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2efb8d24-a358-43df-af27-d74c4cf88e1f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.194488 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2efb8d24-a358-43df-af27-d74c4cf88e1f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.195524 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2efb8d24-a358-43df-af27-d74c4cf88e1f-logs\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.196108 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2efb8d24-a358-43df-af27-d74c4cf88e1f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.196108 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0"
Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.200278 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2efb8d24-a358-43df-af27-d74c4cf88e1f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.201232 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efb8d24-a358-43df-af27-d74c4cf88e1f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.203331 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2efb8d24-a358-43df-af27-d74c4cf88e1f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.211333 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2efb8d24-a358-43df-af27-d74c4cf88e1f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.224811 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7zc6\" (UniqueName: \"kubernetes.io/projected/2efb8d24-a358-43df-af27-d74c4cf88e1f-kube-api-access-l7zc6\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.239419 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"2efb8d24-a358-43df-af27-d74c4cf88e1f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.374846 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 08:29:53 crc kubenswrapper[4870]: I0130 08:29:53.954734 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 08:29:54 crc kubenswrapper[4870]: I0130 08:29:54.000350 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2efb8d24-a358-43df-af27-d74c4cf88e1f","Type":"ContainerStarted","Data":"d3081f983b11573196c6784a19ff207cfa84c01fc7f7702be98c15820b68e8a1"}
Jan 30 08:29:54 crc kubenswrapper[4870]: I0130 08:29:54.135939 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65bb64e7-45f2-4b8d-94f0-34c21ac75042" path="/var/lib/kubelet/pods/65bb64e7-45f2-4b8d-94f0-34c21ac75042/volumes"
Jan 30 08:29:55 crc kubenswrapper[4870]: I0130 08:29:55.011359 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2efb8d24-a358-43df-af27-d74c4cf88e1f","Type":"ContainerStarted","Data":"dd93c75b57f31e2a7057630e58ff1cfb8d29f30e84496e1c78b444d92467d133"}
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.006938 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.023700 4870 generic.go:334] "Generic (PLEG): container finished" podID="e7b7679e-30f5-4f8a-96e0-a1581691242d" containerID="50cd0eb040bd3e32d35994ee8e1046627f6c90e1405d0d6272ca90428017a5c3" exitCode=0
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.023759 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.023775 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7b7679e-30f5-4f8a-96e0-a1581691242d","Type":"ContainerDied","Data":"50cd0eb040bd3e32d35994ee8e1046627f6c90e1405d0d6272ca90428017a5c3"}
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.023802 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7b7679e-30f5-4f8a-96e0-a1581691242d","Type":"ContainerDied","Data":"97c0147556c32e49c5c20b79bd0b3e0fe461928cc98a023b7da24b4aa033c569"}
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.023818 4870 scope.go:117] "RemoveContainer" containerID="ea2dc3bf6b01b6486bc1be0495050fed1248c3c10f9f393da82381c4f2c83249"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.025935 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2efb8d24-a358-43df-af27-d74c4cf88e1f","Type":"ContainerStarted","Data":"ae44013a1a049c729f8672b13eccebed76b78ad97103dcc1ce8e359e600fb29e"}
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.045660 4870 scope.go:117] "RemoveContainer" containerID="fb611dcaa5785bae71aa5640bfd419f130d7c82c2c11c264d4870922c0ea5ab8"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.057226 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7b7679e-30f5-4f8a-96e0-a1581691242d-run-httpd\") pod \"e7b7679e-30f5-4f8a-96e0-a1581691242d\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") "
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.057376 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk6ng\" (UniqueName: \"kubernetes.io/projected/e7b7679e-30f5-4f8a-96e0-a1581691242d-kube-api-access-kk6ng\") pod \"e7b7679e-30f5-4f8a-96e0-a1581691242d\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") "
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.057409 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-config-data\") pod \"e7b7679e-30f5-4f8a-96e0-a1581691242d\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") "
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.057522 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-sg-core-conf-yaml\") pod \"e7b7679e-30f5-4f8a-96e0-a1581691242d\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") "
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.057556 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-combined-ca-bundle\") pod \"e7b7679e-30f5-4f8a-96e0-a1581691242d\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") "
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.057584 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7b7679e-30f5-4f8a-96e0-a1581691242d-log-httpd\") pod \"e7b7679e-30f5-4f8a-96e0-a1581691242d\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") "
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.057668 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-scripts\") pod \"e7b7679e-30f5-4f8a-96e0-a1581691242d\" (UID: \"e7b7679e-30f5-4f8a-96e0-a1581691242d\") "
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.057700 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7b7679e-30f5-4f8a-96e0-a1581691242d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e7b7679e-30f5-4f8a-96e0-a1581691242d" (UID: "e7b7679e-30f5-4f8a-96e0-a1581691242d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.058161 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7b7679e-30f5-4f8a-96e0-a1581691242d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e7b7679e-30f5-4f8a-96e0-a1581691242d" (UID: "e7b7679e-30f5-4f8a-96e0-a1581691242d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.058255 4870 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7b7679e-30f5-4f8a-96e0-a1581691242d-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.058355 4870 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7b7679e-30f5-4f8a-96e0-a1581691242d-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.072743 4870 scope.go:117] "RemoveContainer" containerID="d0c9d684e4c518328cada1bb39aa6ed8b3369b2dc238ef8d275c56c25fcc12f4"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.081196 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.081178986 podStartE2EDuration="3.081178986s" podCreationTimestamp="2026-01-30 08:29:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:29:56.073803943 +0000 UTC m=+1234.769351052" watchObservedRunningTime="2026-01-30 08:29:56.081178986 +0000 UTC m=+1234.776726085"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.086128 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7b7679e-30f5-4f8a-96e0-a1581691242d-kube-api-access-kk6ng" (OuterVolumeSpecName: "kube-api-access-kk6ng") pod "e7b7679e-30f5-4f8a-96e0-a1581691242d" (UID: "e7b7679e-30f5-4f8a-96e0-a1581691242d"). InnerVolumeSpecName "kube-api-access-kk6ng". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.088087 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-scripts" (OuterVolumeSpecName: "scripts") pod "e7b7679e-30f5-4f8a-96e0-a1581691242d" (UID: "e7b7679e-30f5-4f8a-96e0-a1581691242d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.095333 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e7b7679e-30f5-4f8a-96e0-a1581691242d" (UID: "e7b7679e-30f5-4f8a-96e0-a1581691242d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.152199 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7b7679e-30f5-4f8a-96e0-a1581691242d" (UID: "e7b7679e-30f5-4f8a-96e0-a1581691242d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.160669 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kk6ng\" (UniqueName: \"kubernetes.io/projected/e7b7679e-30f5-4f8a-96e0-a1581691242d-kube-api-access-kk6ng\") on node \"crc\" DevicePath \"\""
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.160699 4870 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.160708 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.160717 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.172982 4870 scope.go:117] "RemoveContainer" containerID="50cd0eb040bd3e32d35994ee8e1046627f6c90e1405d0d6272ca90428017a5c3"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.199070 4870 scope.go:117] "RemoveContainer" containerID="ea2dc3bf6b01b6486bc1be0495050fed1248c3c10f9f393da82381c4f2c83249"
Jan 30 08:29:56 crc kubenswrapper[4870]: E0130 08:29:56.199492 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea2dc3bf6b01b6486bc1be0495050fed1248c3c10f9f393da82381c4f2c83249\": container with ID starting with ea2dc3bf6b01b6486bc1be0495050fed1248c3c10f9f393da82381c4f2c83249 not found: ID does not exist" containerID="ea2dc3bf6b01b6486bc1be0495050fed1248c3c10f9f393da82381c4f2c83249"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.199535 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea2dc3bf6b01b6486bc1be0495050fed1248c3c10f9f393da82381c4f2c83249"} err="failed to get container status \"ea2dc3bf6b01b6486bc1be0495050fed1248c3c10f9f393da82381c4f2c83249\": rpc error: code = NotFound desc = could not find container \"ea2dc3bf6b01b6486bc1be0495050fed1248c3c10f9f393da82381c4f2c83249\": container with ID starting with ea2dc3bf6b01b6486bc1be0495050fed1248c3c10f9f393da82381c4f2c83249 not found: ID does not exist"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.199564 4870 scope.go:117] "RemoveContainer" containerID="fb611dcaa5785bae71aa5640bfd419f130d7c82c2c11c264d4870922c0ea5ab8"
Jan 30 08:29:56 crc kubenswrapper[4870]: E0130 08:29:56.199839 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb611dcaa5785bae71aa5640bfd419f130d7c82c2c11c264d4870922c0ea5ab8\": container with ID starting with fb611dcaa5785bae71aa5640bfd419f130d7c82c2c11c264d4870922c0ea5ab8 not found: ID does not exist" containerID="fb611dcaa5785bae71aa5640bfd419f130d7c82c2c11c264d4870922c0ea5ab8"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.199892 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb611dcaa5785bae71aa5640bfd419f130d7c82c2c11c264d4870922c0ea5ab8"} err="failed to get container status \"fb611dcaa5785bae71aa5640bfd419f130d7c82c2c11c264d4870922c0ea5ab8\": rpc error: code = NotFound desc = could not find container \"fb611dcaa5785bae71aa5640bfd419f130d7c82c2c11c264d4870922c0ea5ab8\": container with ID starting with fb611dcaa5785bae71aa5640bfd419f130d7c82c2c11c264d4870922c0ea5ab8 not found: ID does not exist"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.199915 4870 scope.go:117] "RemoveContainer" containerID="d0c9d684e4c518328cada1bb39aa6ed8b3369b2dc238ef8d275c56c25fcc12f4"
Jan 30 08:29:56 crc kubenswrapper[4870]: E0130 08:29:56.200128 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0c9d684e4c518328cada1bb39aa6ed8b3369b2dc238ef8d275c56c25fcc12f4\": container with ID starting with d0c9d684e4c518328cada1bb39aa6ed8b3369b2dc238ef8d275c56c25fcc12f4 not found: ID does not exist" containerID="d0c9d684e4c518328cada1bb39aa6ed8b3369b2dc238ef8d275c56c25fcc12f4"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.200155 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0c9d684e4c518328cada1bb39aa6ed8b3369b2dc238ef8d275c56c25fcc12f4"} err="failed to get container status \"d0c9d684e4c518328cada1bb39aa6ed8b3369b2dc238ef8d275c56c25fcc12f4\": rpc error: code = NotFound desc = could not find container \"d0c9d684e4c518328cada1bb39aa6ed8b3369b2dc238ef8d275c56c25fcc12f4\": container with ID starting with d0c9d684e4c518328cada1bb39aa6ed8b3369b2dc238ef8d275c56c25fcc12f4 not found: ID does not exist"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.200171 4870 scope.go:117] "RemoveContainer" containerID="50cd0eb040bd3e32d35994ee8e1046627f6c90e1405d0d6272ca90428017a5c3"
Jan 30 08:29:56 crc kubenswrapper[4870]: E0130 08:29:56.200527 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50cd0eb040bd3e32d35994ee8e1046627f6c90e1405d0d6272ca90428017a5c3\": container with ID starting with 50cd0eb040bd3e32d35994ee8e1046627f6c90e1405d0d6272ca90428017a5c3 not found: ID does not exist" containerID="50cd0eb040bd3e32d35994ee8e1046627f6c90e1405d0d6272ca90428017a5c3"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.200558 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50cd0eb040bd3e32d35994ee8e1046627f6c90e1405d0d6272ca90428017a5c3"} err="failed to get container status \"50cd0eb040bd3e32d35994ee8e1046627f6c90e1405d0d6272ca90428017a5c3\": rpc error: code = NotFound desc = could not find container \"50cd0eb040bd3e32d35994ee8e1046627f6c90e1405d0d6272ca90428017a5c3\": container with ID starting with 50cd0eb040bd3e32d35994ee8e1046627f6c90e1405d0d6272ca90428017a5c3 not found: ID does not exist"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.205933 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-config-data" (OuterVolumeSpecName: "config-data") pod "e7b7679e-30f5-4f8a-96e0-a1581691242d" (UID: "e7b7679e-30f5-4f8a-96e0-a1581691242d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.262445 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7b7679e-30f5-4f8a-96e0-a1581691242d-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.357172 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.371713 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.381032 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 30 08:29:56 crc kubenswrapper[4870]: E0130 08:29:56.381453 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b7679e-30f5-4f8a-96e0-a1581691242d" containerName="ceilometer-notification-agent"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.381471 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b7679e-30f5-4f8a-96e0-a1581691242d" containerName="ceilometer-notification-agent"
Jan 30 08:29:56 crc kubenswrapper[4870]: E0130 08:29:56.381519 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b7679e-30f5-4f8a-96e0-a1581691242d" containerName="ceilometer-central-agent"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.381527 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b7679e-30f5-4f8a-96e0-a1581691242d" containerName="ceilometer-central-agent"
Jan 30 08:29:56 crc kubenswrapper[4870]: E0130 08:29:56.381544 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b7679e-30f5-4f8a-96e0-a1581691242d" containerName="sg-core"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.381550 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b7679e-30f5-4f8a-96e0-a1581691242d" containerName="sg-core"
Jan 30 08:29:56 crc kubenswrapper[4870]: E0130 08:29:56.381571 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b7679e-30f5-4f8a-96e0-a1581691242d" containerName="proxy-httpd"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.381577 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b7679e-30f5-4f8a-96e0-a1581691242d" containerName="proxy-httpd"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.381800 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7b7679e-30f5-4f8a-96e0-a1581691242d" containerName="proxy-httpd"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.381812 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7b7679e-30f5-4f8a-96e0-a1581691242d" containerName="ceilometer-notification-agent"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.381824 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7b7679e-30f5-4f8a-96e0-a1581691242d" containerName="ceilometer-central-agent"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.381834 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7b7679e-30f5-4f8a-96e0-a1581691242d" containerName="sg-core"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.383559 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.386908 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.387217 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.394978 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.466386 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.466476 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-scripts\") pod \"ceilometer-0\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.466528 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-run-httpd\") pod \"ceilometer-0\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.466563 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvh7f\" (UniqueName: \"kubernetes.io/projected/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-kube-api-access-dvh7f\") pod \"ceilometer-0\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.466636 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-log-httpd\") pod \"ceilometer-0\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.466687 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.466718 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-config-data\") pod \"ceilometer-0\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.568820 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-log-httpd\") pod \"ceilometer-0\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.568906 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.568933 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-config-data\") pod \"ceilometer-0\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.568991 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.569032 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-scripts\") pod \"ceilometer-0\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.569069 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-run-httpd\") pod \"ceilometer-0\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.569105 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvh7f\" (UniqueName: \"kubernetes.io/projected/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-kube-api-access-dvh7f\") pod \"ceilometer-0\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.569404 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-log-httpd\") pod \"ceilometer-0\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.569435 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-run-httpd\") pod \"ceilometer-0\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.573312 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.575035 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-config-data\") pod \"ceilometer-0\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.576072 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-scripts\") pod \"ceilometer-0\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.577494 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.587349 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvh7f\" (UniqueName: \"kubernetes.io/projected/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-kube-api-access-dvh7f\") pod \"ceilometer-0\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.677206 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-rxztf"]
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.678936 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-rxztf"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.692438 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-rxztf"]
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.709495 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.758086 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-p626s"]
Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.760423 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-p626s"
Need to start a new one" pod="openstack/nova-cell0-db-create-p626s" Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.779207 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0467c513-d47e-4251-a042-74a1f0a3ba8e-operator-scripts\") pod \"nova-api-db-create-rxztf\" (UID: \"0467c513-d47e-4251-a042-74a1f0a3ba8e\") " pod="openstack/nova-api-db-create-rxztf" Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.779478 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn8kz\" (UniqueName: \"kubernetes.io/projected/b82e1e2b-e78e-4b8f-8303-2ea82b24bf28-kube-api-access-hn8kz\") pod \"nova-cell0-db-create-p626s\" (UID: \"b82e1e2b-e78e-4b8f-8303-2ea82b24bf28\") " pod="openstack/nova-cell0-db-create-p626s" Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.779594 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkht5\" (UniqueName: \"kubernetes.io/projected/0467c513-d47e-4251-a042-74a1f0a3ba8e-kube-api-access-mkht5\") pod \"nova-api-db-create-rxztf\" (UID: \"0467c513-d47e-4251-a042-74a1f0a3ba8e\") " pod="openstack/nova-api-db-create-rxztf" Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.779714 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b82e1e2b-e78e-4b8f-8303-2ea82b24bf28-operator-scripts\") pod \"nova-cell0-db-create-p626s\" (UID: \"b82e1e2b-e78e-4b8f-8303-2ea82b24bf28\") " pod="openstack/nova-cell0-db-create-p626s" Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.841822 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-p626s"] Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.881216 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkht5\" (UniqueName: \"kubernetes.io/projected/0467c513-d47e-4251-a042-74a1f0a3ba8e-kube-api-access-mkht5\") pod \"nova-api-db-create-rxztf\" (UID: \"0467c513-d47e-4251-a042-74a1f0a3ba8e\") " pod="openstack/nova-api-db-create-rxztf" Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.881273 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b82e1e2b-e78e-4b8f-8303-2ea82b24bf28-operator-scripts\") pod \"nova-cell0-db-create-p626s\" (UID: \"b82e1e2b-e78e-4b8f-8303-2ea82b24bf28\") " pod="openstack/nova-cell0-db-create-p626s" Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.881378 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0467c513-d47e-4251-a042-74a1f0a3ba8e-operator-scripts\") pod \"nova-api-db-create-rxztf\" (UID: \"0467c513-d47e-4251-a042-74a1f0a3ba8e\") " pod="openstack/nova-api-db-create-rxztf" Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.881400 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn8kz\" (UniqueName: \"kubernetes.io/projected/b82e1e2b-e78e-4b8f-8303-2ea82b24bf28-kube-api-access-hn8kz\") pod \"nova-cell0-db-create-p626s\" (UID: \"b82e1e2b-e78e-4b8f-8303-2ea82b24bf28\") " pod="openstack/nova-cell0-db-create-p626s" Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.890318 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b82e1e2b-e78e-4b8f-8303-2ea82b24bf28-operator-scripts\") pod \"nova-cell0-db-create-p626s\" (UID: \"b82e1e2b-e78e-4b8f-8303-2ea82b24bf28\") " pod="openstack/nova-cell0-db-create-p626s" Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.891114 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0467c513-d47e-4251-a042-74a1f0a3ba8e-operator-scripts\") pod \"nova-api-db-create-rxztf\" (UID: \"0467c513-d47e-4251-a042-74a1f0a3ba8e\") " pod="openstack/nova-api-db-create-rxztf" Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.944990 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn8kz\" (UniqueName: \"kubernetes.io/projected/b82e1e2b-e78e-4b8f-8303-2ea82b24bf28-kube-api-access-hn8kz\") pod \"nova-cell0-db-create-p626s\" (UID: \"b82e1e2b-e78e-4b8f-8303-2ea82b24bf28\") " pod="openstack/nova-cell0-db-create-p626s" Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.949495 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkht5\" (UniqueName: \"kubernetes.io/projected/0467c513-d47e-4251-a042-74a1f0a3ba8e-kube-api-access-mkht5\") pod \"nova-api-db-create-rxztf\" (UID: \"0467c513-d47e-4251-a042-74a1f0a3ba8e\") " pod="openstack/nova-api-db-create-rxztf" Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.966013 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-89bf-account-create-update-s9p8t"] Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.967137 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-89bf-account-create-update-s9p8t" Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.970282 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 08:29:56 crc kubenswrapper[4870]: I0130 08:29:56.990956 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-89bf-account-create-update-s9p8t"] Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.000976 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-mz9qm"] Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.002435 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mz9qm" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.002898 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-rxztf" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.028200 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mz9qm"] Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.088032 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7c52-account-create-update-bc4lx"] Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.089344 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7c52-account-create-update-bc4lx" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.091052 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqtlz\" (UniqueName: \"kubernetes.io/projected/6cd82862-2bef-4d86-be4e-38f670a252bd-kube-api-access-rqtlz\") pod \"nova-cell1-db-create-mz9qm\" (UID: \"6cd82862-2bef-4d86-be4e-38f670a252bd\") " pod="openstack/nova-cell1-db-create-mz9qm" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.091153 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981-operator-scripts\") pod \"nova-api-89bf-account-create-update-s9p8t\" (UID: \"f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981\") " pod="openstack/nova-api-89bf-account-create-update-s9p8t" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.091180 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cd82862-2bef-4d86-be4e-38f670a252bd-operator-scripts\") pod \"nova-cell1-db-create-mz9qm\" (UID: \"6cd82862-2bef-4d86-be4e-38f670a252bd\") " pod="openstack/nova-cell1-db-create-mz9qm" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.091224 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvdhf\" (UniqueName: \"kubernetes.io/projected/f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981-kube-api-access-kvdhf\") pod \"nova-api-89bf-account-create-update-s9p8t\" (UID: \"f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981\") " pod="openstack/nova-api-89bf-account-create-update-s9p8t" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.093113 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.100860 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7c52-account-create-update-bc4lx"] Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.175287 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-9204-account-create-update-pczk5"] Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.176486 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-p626s" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.183706 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9204-account-create-update-pczk5" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.187601 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.192501 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqtlz\" (UniqueName: \"kubernetes.io/projected/6cd82862-2bef-4d86-be4e-38f670a252bd-kube-api-access-rqtlz\") pod \"nova-cell1-db-create-mz9qm\" (UID: \"6cd82862-2bef-4d86-be4e-38f670a252bd\") " pod="openstack/nova-cell1-db-create-mz9qm" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.192984 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aa80552-6dc1-43b4-ba32-8fca58595c32-operator-scripts\") pod \"nova-cell0-7c52-account-create-update-bc4lx\" (UID: \"9aa80552-6dc1-43b4-ba32-8fca58595c32\") " pod="openstack/nova-cell0-7c52-account-create-update-bc4lx" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.193041 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djvzs\" (UniqueName: \"kubernetes.io/projected/9aa80552-6dc1-43b4-ba32-8fca58595c32-kube-api-access-djvzs\") pod \"nova-cell0-7c52-account-create-update-bc4lx\" (UID: \"9aa80552-6dc1-43b4-ba32-8fca58595c32\") " pod="openstack/nova-cell0-7c52-account-create-update-bc4lx" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.193090 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981-operator-scripts\") pod \"nova-api-89bf-account-create-update-s9p8t\" (UID: \"f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981\") " pod="openstack/nova-api-89bf-account-create-update-s9p8t" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.193113 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cd82862-2bef-4d86-be4e-38f670a252bd-operator-scripts\") pod \"nova-cell1-db-create-mz9qm\" (UID: \"6cd82862-2bef-4d86-be4e-38f670a252bd\") " pod="openstack/nova-cell1-db-create-mz9qm" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.193210 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvdhf\" (UniqueName: \"kubernetes.io/projected/f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981-kube-api-access-kvdhf\") pod \"nova-api-89bf-account-create-update-s9p8t\" (UID: \"f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981\") " pod="openstack/nova-api-89bf-account-create-update-s9p8t" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.195639 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981-operator-scripts\") pod \"nova-api-89bf-account-create-update-s9p8t\" (UID: \"f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981\") " pod="openstack/nova-api-89bf-account-create-update-s9p8t" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.196145 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cd82862-2bef-4d86-be4e-38f670a252bd-operator-scripts\") pod \"nova-cell1-db-create-mz9qm\" (UID: \"6cd82862-2bef-4d86-be4e-38f670a252bd\") " 
pod="openstack/nova-cell1-db-create-mz9qm" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.215384 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9204-account-create-update-pczk5"] Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.215739 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvdhf\" (UniqueName: \"kubernetes.io/projected/f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981-kube-api-access-kvdhf\") pod \"nova-api-89bf-account-create-update-s9p8t\" (UID: \"f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981\") " pod="openstack/nova-api-89bf-account-create-update-s9p8t" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.220064 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqtlz\" (UniqueName: \"kubernetes.io/projected/6cd82862-2bef-4d86-be4e-38f670a252bd-kube-api-access-rqtlz\") pod \"nova-cell1-db-create-mz9qm\" (UID: \"6cd82862-2bef-4d86-be4e-38f670a252bd\") " pod="openstack/nova-cell1-db-create-mz9qm" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.294398 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adf298cb-af81-4272-aacd-2d1342eab106-operator-scripts\") pod \"nova-cell1-9204-account-create-update-pczk5\" (UID: \"adf298cb-af81-4272-aacd-2d1342eab106\") " pod="openstack/nova-cell1-9204-account-create-update-pczk5" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.294683 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aa80552-6dc1-43b4-ba32-8fca58595c32-operator-scripts\") pod \"nova-cell0-7c52-account-create-update-bc4lx\" (UID: \"9aa80552-6dc1-43b4-ba32-8fca58595c32\") " pod="openstack/nova-cell0-7c52-account-create-update-bc4lx" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.294798 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djvzs\" (UniqueName: \"kubernetes.io/projected/9aa80552-6dc1-43b4-ba32-8fca58595c32-kube-api-access-djvzs\") pod \"nova-cell0-7c52-account-create-update-bc4lx\" (UID: \"9aa80552-6dc1-43b4-ba32-8fca58595c32\") " pod="openstack/nova-cell0-7c52-account-create-update-bc4lx" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.294971 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xsbr\" (UniqueName: \"kubernetes.io/projected/adf298cb-af81-4272-aacd-2d1342eab106-kube-api-access-6xsbr\") pod \"nova-cell1-9204-account-create-update-pczk5\" (UID: \"adf298cb-af81-4272-aacd-2d1342eab106\") " pod="openstack/nova-cell1-9204-account-create-update-pczk5" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.295687 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aa80552-6dc1-43b4-ba32-8fca58595c32-operator-scripts\") pod \"nova-cell0-7c52-account-create-update-bc4lx\" (UID: \"9aa80552-6dc1-43b4-ba32-8fca58595c32\") " pod="openstack/nova-cell0-7c52-account-create-update-bc4lx" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.314387 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djvzs\" (UniqueName: \"kubernetes.io/projected/9aa80552-6dc1-43b4-ba32-8fca58595c32-kube-api-access-djvzs\") pod \"nova-cell0-7c52-account-create-update-bc4lx\" (UID: 
\"9aa80552-6dc1-43b4-ba32-8fca58595c32\") " pod="openstack/nova-cell0-7c52-account-create-update-bc4lx" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.351700 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-89bf-account-create-update-s9p8t" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.376012 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mz9qm" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.397774 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xsbr\" (UniqueName: \"kubernetes.io/projected/adf298cb-af81-4272-aacd-2d1342eab106-kube-api-access-6xsbr\") pod \"nova-cell1-9204-account-create-update-pczk5\" (UID: \"adf298cb-af81-4272-aacd-2d1342eab106\") " pod="openstack/nova-cell1-9204-account-create-update-pczk5" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.397859 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adf298cb-af81-4272-aacd-2d1342eab106-operator-scripts\") pod \"nova-cell1-9204-account-create-update-pczk5\" (UID: \"adf298cb-af81-4272-aacd-2d1342eab106\") " pod="openstack/nova-cell1-9204-account-create-update-pczk5" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.398846 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adf298cb-af81-4272-aacd-2d1342eab106-operator-scripts\") pod \"nova-cell1-9204-account-create-update-pczk5\" (UID: \"adf298cb-af81-4272-aacd-2d1342eab106\") " pod="openstack/nova-cell1-9204-account-create-update-pczk5" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.423732 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xsbr\" (UniqueName: \"kubernetes.io/projected/adf298cb-af81-4272-aacd-2d1342eab106-kube-api-access-6xsbr\") pod \"nova-cell1-9204-account-create-update-pczk5\" (UID: \"adf298cb-af81-4272-aacd-2d1342eab106\") " pod="openstack/nova-cell1-9204-account-create-update-pczk5" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.457804 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7c52-account-create-update-bc4lx" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.517640 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.530677 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9204-account-create-update-pczk5" Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.673813 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-rxztf"] Jan 30 08:29:57 crc kubenswrapper[4870]: I0130 08:29:57.804959 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-p626s"] Jan 30 08:29:58 crc kubenswrapper[4870]: I0130 08:29:58.012983 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-89bf-account-create-update-s9p8t"] Jan 30 08:29:58 crc kubenswrapper[4870]: W0130 08:29:58.017942 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf41a5ec0_d6c7_47ab_b69f_c6c2a8bc4981.slice/crio-b9366ec5a789f4161d4c8849c5ae5ee374b040f54371faa892bb11cebebfe432 WatchSource:0}: Error finding container b9366ec5a789f4161d4c8849c5ae5ee374b040f54371faa892bb11cebebfe432: Status 404 returned error can't find the container with id b9366ec5a789f4161d4c8849c5ae5ee374b040f54371faa892bb11cebebfe432 Jan 30 08:29:58 crc kubenswrapper[4870]: I0130 08:29:58.084321 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7b7679e-30f5-4f8a-96e0-a1581691242d" path="/var/lib/kubelet/pods/e7b7679e-30f5-4f8a-96e0-a1581691242d/volumes" Jan 30 08:29:58 crc kubenswrapper[4870]: I0130 08:29:58.085185 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-rxztf" event={"ID":"0467c513-d47e-4251-a042-74a1f0a3ba8e","Type":"ContainerStarted","Data":"5940ff1fdac1d8f6908fb28770dca0cedf697f4f1b3f1ea8731ce8c9ec261c73"} Jan 30 08:29:58 crc kubenswrapper[4870]: I0130 08:29:58.085212 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-p626s" event={"ID":"b82e1e2b-e78e-4b8f-8303-2ea82b24bf28","Type":"ContainerStarted","Data":"7f0343f1954a5bd424e045791df859ecb1dc66660b99fda30c6f6833c2f1eac9"} Jan 30 08:29:58 crc kubenswrapper[4870]: I0130 08:29:58.085938 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-89bf-account-create-update-s9p8t" event={"ID":"f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981","Type":"ContainerStarted","Data":"b9366ec5a789f4161d4c8849c5ae5ee374b040f54371faa892bb11cebebfe432"} Jan 30 08:29:58 crc kubenswrapper[4870]: I0130 08:29:58.094498 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa8c4e64-0886-44a9-95cb-6d6cc56748c1","Type":"ContainerStarted","Data":"2b005b2e14aaf89586d8bc7aa8c8d809f04257ae6b9ce25164b29b34269d45d4"} Jan 30 08:29:58 crc kubenswrapper[4870]: I0130 08:29:58.128556 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mz9qm"] Jan 30 08:29:58 crc kubenswrapper[4870]: I0130 08:29:58.218683 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 30 08:29:58 crc kubenswrapper[4870]: I0130 08:29:58.219820 4870 scope.go:117] "RemoveContainer" containerID="bbcef241c5e081af384f050ba2bd88c0480b531f6aa26fa9ee5db11c38f751ba" Jan 30 08:29:58 crc kubenswrapper[4870]: I0130 08:29:58.221515 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 30 08:29:58 crc kubenswrapper[4870]: I0130 08:29:58.243813 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7c52-account-create-update-bc4lx"] Jan 30 08:29:58 crc kubenswrapper[4870]: I0130 
08:29:58.291556 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9204-account-create-update-pczk5"] Jan 30 08:29:59 crc kubenswrapper[4870]: I0130 08:29:59.103827 4870 generic.go:334] "Generic (PLEG): container finished" podID="b82e1e2b-e78e-4b8f-8303-2ea82b24bf28" containerID="23a36de41e3e5413c9d4a8e53e9d9062761ceb2d5ea6dc50cc6414dd812317b7" exitCode=0 Jan 30 08:29:59 crc kubenswrapper[4870]: I0130 08:29:59.103928 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-p626s" event={"ID":"b82e1e2b-e78e-4b8f-8303-2ea82b24bf28","Type":"ContainerDied","Data":"23a36de41e3e5413c9d4a8e53e9d9062761ceb2d5ea6dc50cc6414dd812317b7"} Jan 30 08:29:59 crc kubenswrapper[4870]: I0130 08:29:59.105606 4870 generic.go:334] "Generic (PLEG): container finished" podID="f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981" containerID="10d8adf976aee141ddedf0f0b7d4a560074ff0040c0d225d7fde8dac560cebcd" exitCode=0 Jan 30 08:29:59 crc kubenswrapper[4870]: I0130 08:29:59.105671 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-89bf-account-create-update-s9p8t" event={"ID":"f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981","Type":"ContainerDied","Data":"10d8adf976aee141ddedf0f0b7d4a560074ff0040c0d225d7fde8dac560cebcd"} Jan 30 08:29:59 crc kubenswrapper[4870]: I0130 08:29:59.107106 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8628af25-d5e4-46a0-adec-4c25ca39676b","Type":"ContainerStarted","Data":"f199bc9354b47787d97254410fafef753eb2ceb40e32f6f8bddff5ad8283b2f1"} Jan 30 08:29:59 crc kubenswrapper[4870]: I0130 08:29:59.108397 4870 generic.go:334] "Generic (PLEG): container finished" podID="6cd82862-2bef-4d86-be4e-38f670a252bd" containerID="046d4010b0f900fe2cbd28328fdfa8554886e3c18049908c92ba7d45ff824b80" exitCode=0 Jan 30 08:29:59 crc kubenswrapper[4870]: I0130 08:29:59.108525 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mz9qm" event={"ID":"6cd82862-2bef-4d86-be4e-38f670a252bd","Type":"ContainerDied","Data":"046d4010b0f900fe2cbd28328fdfa8554886e3c18049908c92ba7d45ff824b80"} Jan 30 08:29:59 crc kubenswrapper[4870]: I0130 08:29:59.108553 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mz9qm" event={"ID":"6cd82862-2bef-4d86-be4e-38f670a252bd","Type":"ContainerStarted","Data":"576369a1b841249ea912958de987ffaa8827905c5a3d87b5b878429fd342a86a"} Jan 30 08:29:59 crc kubenswrapper[4870]: I0130 08:29:59.109947 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa8c4e64-0886-44a9-95cb-6d6cc56748c1","Type":"ContainerStarted","Data":"2d34ee72279320556589f3ce491e8f0d6dfe6c0bb318d8e32fba2f3e68f571e6"} Jan 30 08:29:59 crc kubenswrapper[4870]: I0130 08:29:59.109975 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa8c4e64-0886-44a9-95cb-6d6cc56748c1","Type":"ContainerStarted","Data":"da041b1ddf9f3cc720ab7d45cc97fcefcbe1006c471d1d13b08c5f348d603138"} Jan 30 08:29:59 crc kubenswrapper[4870]: I0130 08:29:59.111099 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9204-account-create-update-pczk5" event={"ID":"adf298cb-af81-4272-aacd-2d1342eab106","Type":"ContainerStarted","Data":"ad4987bcd683a82b2ef435208c93f9f9d4904561809fe23aa9fd681008a558c6"} Jan 30 08:29:59 crc kubenswrapper[4870]: I0130 08:29:59.111319 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-9204-account-create-update-pczk5" event={"ID":"adf298cb-af81-4272-aacd-2d1342eab106","Type":"ContainerStarted","Data":"ea0efa38a2c03791e8448804b38f7329e64d06e4e94b3314f5ffe51172a0bdc0"} Jan 30 08:29:59 crc kubenswrapper[4870]: I0130 08:29:59.119786 4870 generic.go:334] "Generic (PLEG): container finished" podID="0467c513-d47e-4251-a042-74a1f0a3ba8e" containerID="b3747f1a7b0dcf93ef3e9971ceb218b892bb2531c608e6a07760d677c25d7633" exitCode=0 Jan 30 08:29:59 crc kubenswrapper[4870]: I0130 08:29:59.119904 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-rxztf" event={"ID":"0467c513-d47e-4251-a042-74a1f0a3ba8e","Type":"ContainerDied","Data":"b3747f1a7b0dcf93ef3e9971ceb218b892bb2531c608e6a07760d677c25d7633"} Jan 30 08:29:59 crc kubenswrapper[4870]: I0130 08:29:59.133304 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7c52-account-create-update-bc4lx" event={"ID":"9aa80552-6dc1-43b4-ba32-8fca58595c32","Type":"ContainerStarted","Data":"e7588860011aa90e39e44c8b147a927646ababcde64482fc80c549dc156bfaf7"} Jan 30 08:29:59 crc kubenswrapper[4870]: I0130 08:29:59.133349 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7c52-account-create-update-bc4lx" event={"ID":"9aa80552-6dc1-43b4-ba32-8fca58595c32","Type":"ContainerStarted","Data":"9957703b2d6046e1e86868f06c5b0be9434430ada46171570bd0491396cf389f"} Jan 30 08:29:59 crc kubenswrapper[4870]: I0130 08:29:59.219351 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-9204-account-create-update-pczk5" podStartSLOduration=2.219335163 podStartE2EDuration="2.219335163s" podCreationTimestamp="2026-01-30 08:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:29:59.217179005 +0000 UTC m=+1237.912726114" watchObservedRunningTime="2026-01-30 08:29:59.219335163 +0000 UTC m=+1237.914882272" Jan 30 08:29:59 crc kubenswrapper[4870]: I0130 08:29:59.245246 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-7c52-account-create-update-bc4lx" podStartSLOduration=2.245219876 podStartE2EDuration="2.245219876s" podCreationTimestamp="2026-01-30 08:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:29:59.235269144 +0000 UTC m=+1237.930816253" watchObservedRunningTime="2026-01-30 08:29:59.245219876 +0000 UTC m=+1237.940766985" Jan 30 08:29:59 crc kubenswrapper[4870]: I0130 08:29:59.726563 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.117457 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.149471 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa8c4e64-0886-44a9-95cb-6d6cc56748c1","Type":"ContainerStarted","Data":"734eba5ad7a5ddcb654b171f45ad07bc974840fb2dfafb87d21bbd4db9b452f9"} Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.152652 4870 generic.go:334] "Generic (PLEG): container finished" podID="adf298cb-af81-4272-aacd-2d1342eab106" containerID="ad4987bcd683a82b2ef435208c93f9f9d4904561809fe23aa9fd681008a558c6" exitCode=0 Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.152710 4870 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9204-account-create-update-pczk5" event={"ID":"adf298cb-af81-4272-aacd-2d1342eab106","Type":"ContainerDied","Data":"ad4987bcd683a82b2ef435208c93f9f9d4904561809fe23aa9fd681008a558c6"} Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.154801 4870 generic.go:334] "Generic (PLEG): container finished" podID="9aa80552-6dc1-43b4-ba32-8fca58595c32" containerID="e7588860011aa90e39e44c8b147a927646ababcde64482fc80c549dc156bfaf7" exitCode=0 Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.155228 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7c52-account-create-update-bc4lx" event={"ID":"9aa80552-6dc1-43b4-ba32-8fca58595c32","Type":"ContainerDied","Data":"e7588860011aa90e39e44c8b147a927646ababcde64482fc80c549dc156bfaf7"} Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.173021 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496030-smd5c"] Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.178039 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-smd5c" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.181843 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.182114 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.215946 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496030-smd5c"] Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.416497 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2h5b\" (UniqueName: \"kubernetes.io/projected/3ecb0f40-780e-4f90-84aa-17af92178d88-kube-api-access-l2h5b\") pod \"collect-profiles-29496030-smd5c\" (UID: \"3ecb0f40-780e-4f90-84aa-17af92178d88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-smd5c" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.417432 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ecb0f40-780e-4f90-84aa-17af92178d88-config-volume\") pod \"collect-profiles-29496030-smd5c\" (UID: \"3ecb0f40-780e-4f90-84aa-17af92178d88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-smd5c" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.417626 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ecb0f40-780e-4f90-84aa-17af92178d88-secret-volume\") pod \"collect-profiles-29496030-smd5c\" (UID: \"3ecb0f40-780e-4f90-84aa-17af92178d88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-smd5c" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.439183 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.439345 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 
08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.505182 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.509371 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.521948 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2h5b\" (UniqueName: \"kubernetes.io/projected/3ecb0f40-780e-4f90-84aa-17af92178d88-kube-api-access-l2h5b\") pod \"collect-profiles-29496030-smd5c\" (UID: \"3ecb0f40-780e-4f90-84aa-17af92178d88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-smd5c" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.522228 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ecb0f40-780e-4f90-84aa-17af92178d88-config-volume\") pod \"collect-profiles-29496030-smd5c\" (UID: \"3ecb0f40-780e-4f90-84aa-17af92178d88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-smd5c" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.522380 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ecb0f40-780e-4f90-84aa-17af92178d88-secret-volume\") pod \"collect-profiles-29496030-smd5c\" (UID: \"3ecb0f40-780e-4f90-84aa-17af92178d88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-smd5c" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.524082 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ecb0f40-780e-4f90-84aa-17af92178d88-config-volume\") pod \"collect-profiles-29496030-smd5c\" (UID: \"3ecb0f40-780e-4f90-84aa-17af92178d88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-smd5c" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.529819 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ecb0f40-780e-4f90-84aa-17af92178d88-secret-volume\") pod \"collect-profiles-29496030-smd5c\" (UID: \"3ecb0f40-780e-4f90-84aa-17af92178d88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-smd5c" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.548141 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2h5b\" (UniqueName: \"kubernetes.io/projected/3ecb0f40-780e-4f90-84aa-17af92178d88-kube-api-access-l2h5b\") pod \"collect-profiles-29496030-smd5c\" (UID: \"3ecb0f40-780e-4f90-84aa-17af92178d88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-smd5c" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.725437 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-p626s" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.830706 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b82e1e2b-e78e-4b8f-8303-2ea82b24bf28-operator-scripts\") pod \"b82e1e2b-e78e-4b8f-8303-2ea82b24bf28\" (UID: \"b82e1e2b-e78e-4b8f-8303-2ea82b24bf28\") " Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.830827 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn8kz\" (UniqueName: \"kubernetes.io/projected/b82e1e2b-e78e-4b8f-8303-2ea82b24bf28-kube-api-access-hn8kz\") pod \"b82e1e2b-e78e-4b8f-8303-2ea82b24bf28\" (UID: \"b82e1e2b-e78e-4b8f-8303-2ea82b24bf28\") " Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.831498 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b82e1e2b-e78e-4b8f-8303-2ea82b24bf28-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b82e1e2b-e78e-4b8f-8303-2ea82b24bf28" (UID: "b82e1e2b-e78e-4b8f-8303-2ea82b24bf28"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.836840 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b82e1e2b-e78e-4b8f-8303-2ea82b24bf28-kube-api-access-hn8kz" (OuterVolumeSpecName: "kube-api-access-hn8kz") pod "b82e1e2b-e78e-4b8f-8303-2ea82b24bf28" (UID: "b82e1e2b-e78e-4b8f-8303-2ea82b24bf28"). InnerVolumeSpecName "kube-api-access-hn8kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.843944 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-smd5c" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.877493 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-rxztf" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.931136 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mz9qm" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.935318 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0467c513-d47e-4251-a042-74a1f0a3ba8e-operator-scripts\") pod \"0467c513-d47e-4251-a042-74a1f0a3ba8e\" (UID: \"0467c513-d47e-4251-a042-74a1f0a3ba8e\") " Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.935489 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkht5\" (UniqueName: \"kubernetes.io/projected/0467c513-d47e-4251-a042-74a1f0a3ba8e-kube-api-access-mkht5\") pod \"0467c513-d47e-4251-a042-74a1f0a3ba8e\" (UID: \"0467c513-d47e-4251-a042-74a1f0a3ba8e\") " Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.936040 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0467c513-d47e-4251-a042-74a1f0a3ba8e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0467c513-d47e-4251-a042-74a1f0a3ba8e" (UID: "0467c513-d47e-4251-a042-74a1f0a3ba8e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.936063 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b82e1e2b-e78e-4b8f-8303-2ea82b24bf28-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.936111 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn8kz\" (UniqueName: \"kubernetes.io/projected/b82e1e2b-e78e-4b8f-8303-2ea82b24bf28-kube-api-access-hn8kz\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.943076 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0467c513-d47e-4251-a042-74a1f0a3ba8e-kube-api-access-mkht5" (OuterVolumeSpecName: "kube-api-access-mkht5") pod "0467c513-d47e-4251-a042-74a1f0a3ba8e" (UID: "0467c513-d47e-4251-a042-74a1f0a3ba8e"). InnerVolumeSpecName "kube-api-access-mkht5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:00 crc kubenswrapper[4870]: I0130 08:30:00.982750 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-89bf-account-create-update-s9p8t" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.038832 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cd82862-2bef-4d86-be4e-38f670a252bd-operator-scripts\") pod \"6cd82862-2bef-4d86-be4e-38f670a252bd\" (UID: \"6cd82862-2bef-4d86-be4e-38f670a252bd\") " Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.039167 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqtlz\" (UniqueName: \"kubernetes.io/projected/6cd82862-2bef-4d86-be4e-38f670a252bd-kube-api-access-rqtlz\") pod \"6cd82862-2bef-4d86-be4e-38f670a252bd\" (UID: \"6cd82862-2bef-4d86-be4e-38f670a252bd\") " Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.039239 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981-operator-scripts\") pod \"f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981\" (UID: \"f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981\") " Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.039363 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvdhf\" (UniqueName: \"kubernetes.io/projected/f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981-kube-api-access-kvdhf\") pod \"f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981\" (UID: \"f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981\") " Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.042313 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981" (UID: "f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.045440 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cd82862-2bef-4d86-be4e-38f670a252bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6cd82862-2bef-4d86-be4e-38f670a252bd" (UID: "6cd82862-2bef-4d86-be4e-38f670a252bd"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.046828 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkht5\" (UniqueName: \"kubernetes.io/projected/0467c513-d47e-4251-a042-74a1f0a3ba8e-kube-api-access-mkht5\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.046858 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cd82862-2bef-4d86-be4e-38f670a252bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.046869 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.046900 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0467c513-d47e-4251-a042-74a1f0a3ba8e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.052453 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cd82862-2bef-4d86-be4e-38f670a252bd-kube-api-access-rqtlz" (OuterVolumeSpecName: "kube-api-access-rqtlz") pod "6cd82862-2bef-4d86-be4e-38f670a252bd" (UID: "6cd82862-2bef-4d86-be4e-38f670a252bd"). InnerVolumeSpecName "kube-api-access-rqtlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.061033 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981-kube-api-access-kvdhf" (OuterVolumeSpecName: "kube-api-access-kvdhf") pod "f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981" (UID: "f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981"). InnerVolumeSpecName "kube-api-access-kvdhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.149139 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqtlz\" (UniqueName: \"kubernetes.io/projected/6cd82862-2bef-4d86-be4e-38f670a252bd-kube-api-access-rqtlz\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.150063 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvdhf\" (UniqueName: \"kubernetes.io/projected/f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981-kube-api-access-kvdhf\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.166107 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-89bf-account-create-update-s9p8t" event={"ID":"f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981","Type":"ContainerDied","Data":"b9366ec5a789f4161d4c8849c5ae5ee374b040f54371faa892bb11cebebfe432"} Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.166142 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9366ec5a789f4161d4c8849c5ae5ee374b040f54371faa892bb11cebebfe432" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.166214 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-89bf-account-create-update-s9p8t" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.167909 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mz9qm" event={"ID":"6cd82862-2bef-4d86-be4e-38f670a252bd","Type":"ContainerDied","Data":"576369a1b841249ea912958de987ffaa8827905c5a3d87b5b878429fd342a86a"} Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.167942 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="576369a1b841249ea912958de987ffaa8827905c5a3d87b5b878429fd342a86a" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.168008 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mz9qm" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.172953 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-rxztf" event={"ID":"0467c513-d47e-4251-a042-74a1f0a3ba8e","Type":"ContainerDied","Data":"5940ff1fdac1d8f6908fb28770dca0cedf697f4f1b3f1ea8731ce8c9ec261c73"} Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.173003 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5940ff1fdac1d8f6908fb28770dca0cedf697f4f1b3f1ea8731ce8c9ec261c73" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.173081 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-rxztf" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.181979 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-p626s" event={"ID":"b82e1e2b-e78e-4b8f-8303-2ea82b24bf28","Type":"ContainerDied","Data":"7f0343f1954a5bd424e045791df859ecb1dc66660b99fda30c6f6833c2f1eac9"} Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.182021 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f0343f1954a5bd424e045791df859ecb1dc66660b99fda30c6f6833c2f1eac9" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.182079 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-p626s" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.189094 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"204a0d39-f7b0-4468-a82f-9fcc49fc1281","Type":"ContainerStarted","Data":"219179db9ef08389269b0601fc5a735b1c5004657ec7eeb3fa4440e5907cfe2d"} Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.189317 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.189342 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.212751 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=4.603126337 podStartE2EDuration="41.212717383s" podCreationTimestamp="2026-01-30 08:29:20 +0000 UTC" firstStartedPulling="2026-01-30 08:29:23.751755258 +0000 UTC m=+1202.447302367" lastFinishedPulling="2026-01-30 08:30:00.361346314 +0000 UTC m=+1239.056893413" observedRunningTime="2026-01-30 08:30:01.212636041 +0000 UTC m=+1239.908183160" watchObservedRunningTime="2026-01-30 08:30:01.212717383 +0000 UTC m=+1239.908264492" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.367372 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496030-smd5c"] Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.722053 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7c52-account-create-update-bc4lx" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.730836 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9204-account-create-update-pczk5" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.782946 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adf298cb-af81-4272-aacd-2d1342eab106-operator-scripts\") pod \"adf298cb-af81-4272-aacd-2d1342eab106\" (UID: \"adf298cb-af81-4272-aacd-2d1342eab106\") " Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.783307 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aa80552-6dc1-43b4-ba32-8fca58595c32-operator-scripts\") pod \"9aa80552-6dc1-43b4-ba32-8fca58595c32\" (UID: \"9aa80552-6dc1-43b4-ba32-8fca58595c32\") " Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.783385 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djvzs\" (UniqueName: \"kubernetes.io/projected/9aa80552-6dc1-43b4-ba32-8fca58595c32-kube-api-access-djvzs\") pod \"9aa80552-6dc1-43b4-ba32-8fca58595c32\" (UID: \"9aa80552-6dc1-43b4-ba32-8fca58595c32\") " Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.783408 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xsbr\" (UniqueName: \"kubernetes.io/projected/adf298cb-af81-4272-aacd-2d1342eab106-kube-api-access-6xsbr\") pod \"adf298cb-af81-4272-aacd-2d1342eab106\" (UID: \"adf298cb-af81-4272-aacd-2d1342eab106\") " Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.785159 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aa80552-6dc1-43b4-ba32-8fca58595c32-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9aa80552-6dc1-43b4-ba32-8fca58595c32" (UID: "9aa80552-6dc1-43b4-ba32-8fca58595c32"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.785239 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adf298cb-af81-4272-aacd-2d1342eab106-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "adf298cb-af81-4272-aacd-2d1342eab106" (UID: "adf298cb-af81-4272-aacd-2d1342eab106"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.794423 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adf298cb-af81-4272-aacd-2d1342eab106-kube-api-access-6xsbr" (OuterVolumeSpecName: "kube-api-access-6xsbr") pod "adf298cb-af81-4272-aacd-2d1342eab106" (UID: "adf298cb-af81-4272-aacd-2d1342eab106"). InnerVolumeSpecName "kube-api-access-6xsbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.794534 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aa80552-6dc1-43b4-ba32-8fca58595c32-kube-api-access-djvzs" (OuterVolumeSpecName: "kube-api-access-djvzs") pod "9aa80552-6dc1-43b4-ba32-8fca58595c32" (UID: "9aa80552-6dc1-43b4-ba32-8fca58595c32"). InnerVolumeSpecName "kube-api-access-djvzs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.886039 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adf298cb-af81-4272-aacd-2d1342eab106-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.886068 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aa80552-6dc1-43b4-ba32-8fca58595c32-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.886080 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djvzs\" (UniqueName: \"kubernetes.io/projected/9aa80552-6dc1-43b4-ba32-8fca58595c32-kube-api-access-djvzs\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:01 crc kubenswrapper[4870]: I0130 08:30:01.886090 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xsbr\" (UniqueName: \"kubernetes.io/projected/adf298cb-af81-4272-aacd-2d1342eab106-kube-api-access-6xsbr\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:02 crc kubenswrapper[4870]: I0130 08:30:02.201648 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9204-account-create-update-pczk5" event={"ID":"adf298cb-af81-4272-aacd-2d1342eab106","Type":"ContainerDied","Data":"ea0efa38a2c03791e8448804b38f7329e64d06e4e94b3314f5ffe51172a0bdc0"} Jan 30 08:30:02 crc kubenswrapper[4870]: I0130 08:30:02.201686 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea0efa38a2c03791e8448804b38f7329e64d06e4e94b3314f5ffe51172a0bdc0" Jan 30 08:30:02 crc kubenswrapper[4870]: I0130 08:30:02.203789 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7c52-account-create-update-bc4lx" event={"ID":"9aa80552-6dc1-43b4-ba32-8fca58595c32","Type":"ContainerDied","Data":"9957703b2d6046e1e86868f06c5b0be9434430ada46171570bd0491396cf389f"} Jan 30 08:30:02 crc kubenswrapper[4870]: I0130 08:30:02.203813 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9957703b2d6046e1e86868f06c5b0be9434430ada46171570bd0491396cf389f" Jan 30 08:30:02 crc kubenswrapper[4870]: I0130 08:30:02.203936 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7c52-account-create-update-bc4lx" Jan 30 08:30:02 crc kubenswrapper[4870]: I0130 08:30:02.204031 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9204-account-create-update-pczk5" Jan 30 08:30:02 crc kubenswrapper[4870]: I0130 08:30:02.207047 4870 generic.go:334] "Generic (PLEG): container finished" podID="3ecb0f40-780e-4f90-84aa-17af92178d88" containerID="dad0dbfc8aebf8b014e37d2f50b6d2deebcdfeb8419d761ea14c44680273c1c3" exitCode=0 Jan 30 08:30:02 crc kubenswrapper[4870]: I0130 08:30:02.207157 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-smd5c" event={"ID":"3ecb0f40-780e-4f90-84aa-17af92178d88","Type":"ContainerDied","Data":"dad0dbfc8aebf8b014e37d2f50b6d2deebcdfeb8419d761ea14c44680273c1c3"} Jan 30 08:30:02 crc kubenswrapper[4870]: I0130 08:30:02.207296 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-smd5c" event={"ID":"3ecb0f40-780e-4f90-84aa-17af92178d88","Type":"ContainerStarted","Data":"a526ed944565bd757b9a4bba685fb1dd9feaa4eda6024207094fe861d1f2ad25"} Jan 30 08:30:02 crc kubenswrapper[4870]: I0130 08:30:02.212245 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa8c4e64-0886-44a9-95cb-6d6cc56748c1","Type":"ContainerStarted","Data":"1bc5247fa1f481f03b4623f335ffd2271a8804ce7729b91b95a52302230b32d9"} Jan 30 08:30:02 crc kubenswrapper[4870]: I0130 08:30:02.212960 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fa8c4e64-0886-44a9-95cb-6d6cc56748c1" containerName="ceilometer-central-agent" containerID="cri-o://da041b1ddf9f3cc720ab7d45cc97fcefcbe1006c471d1d13b08c5f348d603138" gracePeriod=30 Jan 30 08:30:02 crc kubenswrapper[4870]: I0130 08:30:02.213067 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fa8c4e64-0886-44a9-95cb-6d6cc56748c1" containerName="proxy-httpd" containerID="cri-o://1bc5247fa1f481f03b4623f335ffd2271a8804ce7729b91b95a52302230b32d9" gracePeriod=30 Jan 30 08:30:02 crc kubenswrapper[4870]: I0130 08:30:02.213107 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fa8c4e64-0886-44a9-95cb-6d6cc56748c1" containerName="sg-core" containerID="cri-o://734eba5ad7a5ddcb654b171f45ad07bc974840fb2dfafb87d21bbd4db9b452f9" gracePeriod=30 Jan 30 08:30:02 crc kubenswrapper[4870]: I0130 08:30:02.213139 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fa8c4e64-0886-44a9-95cb-6d6cc56748c1" containerName="ceilometer-notification-agent" containerID="cri-o://2d34ee72279320556589f3ce491e8f0d6dfe6c0bb318d8e32fba2f3e68f571e6" gracePeriod=30 Jan 30 08:30:02 crc kubenswrapper[4870]: I0130 08:30:02.242791 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.336838056 podStartE2EDuration="6.242775747s" podCreationTimestamp="2026-01-30 08:29:56 +0000 UTC" firstStartedPulling="2026-01-30 08:29:57.596842816 +0000 UTC m=+1236.292389925" lastFinishedPulling="2026-01-30 08:30:01.502780507 +0000 UTC m=+1240.198327616" observedRunningTime="2026-01-30 08:30:02.241363892 +0000 UTC m=+1240.936911001" watchObservedRunningTime="2026-01-30 08:30:02.242775747 +0000 UTC m=+1240.938322856" Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.221768 4870 generic.go:334] "Generic (PLEG): container finished" podID="fa8c4e64-0886-44a9-95cb-6d6cc56748c1" 
containerID="1bc5247fa1f481f03b4623f335ffd2271a8804ce7729b91b95a52302230b32d9" exitCode=0 Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.221798 4870 generic.go:334] "Generic (PLEG): container finished" podID="fa8c4e64-0886-44a9-95cb-6d6cc56748c1" containerID="734eba5ad7a5ddcb654b171f45ad07bc974840fb2dfafb87d21bbd4db9b452f9" exitCode=2 Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.221806 4870 generic.go:334] "Generic (PLEG): container finished" podID="fa8c4e64-0886-44a9-95cb-6d6cc56748c1" containerID="2d34ee72279320556589f3ce491e8f0d6dfe6c0bb318d8e32fba2f3e68f571e6" exitCode=0 Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.221965 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa8c4e64-0886-44a9-95cb-6d6cc56748c1","Type":"ContainerDied","Data":"1bc5247fa1f481f03b4623f335ffd2271a8804ce7729b91b95a52302230b32d9"} Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.221989 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa8c4e64-0886-44a9-95cb-6d6cc56748c1","Type":"ContainerDied","Data":"734eba5ad7a5ddcb654b171f45ad07bc974840fb2dfafb87d21bbd4db9b452f9"} Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.221998 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa8c4e64-0886-44a9-95cb-6d6cc56748c1","Type":"ContainerDied","Data":"2d34ee72279320556589f3ce491e8f0d6dfe6c0bb318d8e32fba2f3e68f571e6"} Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.375334 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.376108 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.386507 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.386583 4870 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.428368 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.438753 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.480348 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.592354 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-smd5c" Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.729088 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2h5b\" (UniqueName: \"kubernetes.io/projected/3ecb0f40-780e-4f90-84aa-17af92178d88-kube-api-access-l2h5b\") pod \"3ecb0f40-780e-4f90-84aa-17af92178d88\" (UID: \"3ecb0f40-780e-4f90-84aa-17af92178d88\") " Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.729251 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ecb0f40-780e-4f90-84aa-17af92178d88-config-volume\") pod \"3ecb0f40-780e-4f90-84aa-17af92178d88\" (UID: \"3ecb0f40-780e-4f90-84aa-17af92178d88\") " Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.729411 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ecb0f40-780e-4f90-84aa-17af92178d88-secret-volume\") pod \"3ecb0f40-780e-4f90-84aa-17af92178d88\" (UID: \"3ecb0f40-780e-4f90-84aa-17af92178d88\") " Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.730099 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ecb0f40-780e-4f90-84aa-17af92178d88-config-volume" (OuterVolumeSpecName: "config-volume") pod "3ecb0f40-780e-4f90-84aa-17af92178d88" (UID: "3ecb0f40-780e-4f90-84aa-17af92178d88"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.738785 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ecb0f40-780e-4f90-84aa-17af92178d88-kube-api-access-l2h5b" (OuterVolumeSpecName: "kube-api-access-l2h5b") pod "3ecb0f40-780e-4f90-84aa-17af92178d88" (UID: "3ecb0f40-780e-4f90-84aa-17af92178d88"). InnerVolumeSpecName "kube-api-access-l2h5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.743755 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecb0f40-780e-4f90-84aa-17af92178d88-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3ecb0f40-780e-4f90-84aa-17af92178d88" (UID: "3ecb0f40-780e-4f90-84aa-17af92178d88"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.835121 4870 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ecb0f40-780e-4f90-84aa-17af92178d88-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.835155 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2h5b\" (UniqueName: \"kubernetes.io/projected/3ecb0f40-780e-4f90-84aa-17af92178d88-kube-api-access-l2h5b\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:03 crc kubenswrapper[4870]: I0130 08:30:03.835165 4870 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ecb0f40-780e-4f90-84aa-17af92178d88-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:04 crc kubenswrapper[4870]: I0130 08:30:04.234095 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-smd5c" event={"ID":"3ecb0f40-780e-4f90-84aa-17af92178d88","Type":"ContainerDied","Data":"a526ed944565bd757b9a4bba685fb1dd9feaa4eda6024207094fe861d1f2ad25"} Jan 30 08:30:04 crc kubenswrapper[4870]: I0130 08:30:04.234148 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a526ed944565bd757b9a4bba685fb1dd9feaa4eda6024207094fe861d1f2ad25" Jan 30 08:30:04 crc kubenswrapper[4870]: I0130 08:30:04.234299 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-smd5c" Jan 30 08:30:04 crc kubenswrapper[4870]: I0130 08:30:04.234737 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 08:30:04 crc kubenswrapper[4870]: I0130 08:30:04.234800 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 08:30:05 crc kubenswrapper[4870]: I0130 08:30:05.950972 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 08:30:05 crc kubenswrapper[4870]: I0130 08:30:05.960346 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.384799 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gs8vz"] Jan 30 08:30:07 crc kubenswrapper[4870]: E0130 08:30:07.385388 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82e1e2b-e78e-4b8f-8303-2ea82b24bf28" containerName="mariadb-database-create" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.385400 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82e1e2b-e78e-4b8f-8303-2ea82b24bf28" containerName="mariadb-database-create" Jan 30 08:30:07 crc kubenswrapper[4870]: E0130 08:30:07.385421 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd82862-2bef-4d86-be4e-38f670a252bd" containerName="mariadb-database-create" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.385427 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd82862-2bef-4d86-be4e-38f670a252bd" containerName="mariadb-database-create" Jan 30 08:30:07 crc kubenswrapper[4870]: E0130 08:30:07.385442 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ecb0f40-780e-4f90-84aa-17af92178d88" 
containerName="collect-profiles" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.385448 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ecb0f40-780e-4f90-84aa-17af92178d88" containerName="collect-profiles" Jan 30 08:30:07 crc kubenswrapper[4870]: E0130 08:30:07.385457 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa80552-6dc1-43b4-ba32-8fca58595c32" containerName="mariadb-account-create-update" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.385463 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa80552-6dc1-43b4-ba32-8fca58595c32" containerName="mariadb-account-create-update" Jan 30 08:30:07 crc kubenswrapper[4870]: E0130 08:30:07.385477 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981" containerName="mariadb-account-create-update" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.385483 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981" containerName="mariadb-account-create-update" Jan 30 08:30:07 crc kubenswrapper[4870]: E0130 08:30:07.385504 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0467c513-d47e-4251-a042-74a1f0a3ba8e" containerName="mariadb-database-create" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.385510 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0467c513-d47e-4251-a042-74a1f0a3ba8e" containerName="mariadb-database-create" Jan 30 08:30:07 crc kubenswrapper[4870]: E0130 08:30:07.385521 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf298cb-af81-4272-aacd-2d1342eab106" containerName="mariadb-account-create-update" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.385527 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf298cb-af81-4272-aacd-2d1342eab106" containerName="mariadb-account-create-update" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.385680 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="b82e1e2b-e78e-4b8f-8303-2ea82b24bf28" containerName="mariadb-database-create" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.385693 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="0467c513-d47e-4251-a042-74a1f0a3ba8e" containerName="mariadb-database-create" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.385706 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf298cb-af81-4272-aacd-2d1342eab106" containerName="mariadb-account-create-update" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.385716 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981" containerName="mariadb-account-create-update" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.385728 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aa80552-6dc1-43b4-ba32-8fca58595c32" containerName="mariadb-account-create-update" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.385740 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ecb0f40-780e-4f90-84aa-17af92178d88" containerName="collect-profiles" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.385749 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cd82862-2bef-4d86-be4e-38f670a252bd" containerName="mariadb-database-create" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.386487 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gs8vz" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.388652 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4t5s8" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.388816 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.388961 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.398080 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gs8vz"] Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.513666 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/463149ce-687b-479c-ab61-030371f69acb-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gs8vz\" (UID: \"463149ce-687b-479c-ab61-030371f69acb\") " pod="openstack/nova-cell0-conductor-db-sync-gs8vz" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.513740 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/463149ce-687b-479c-ab61-030371f69acb-scripts\") pod \"nova-cell0-conductor-db-sync-gs8vz\" (UID: \"463149ce-687b-479c-ab61-030371f69acb\") " pod="openstack/nova-cell0-conductor-db-sync-gs8vz" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.513799 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/463149ce-687b-479c-ab61-030371f69acb-config-data\") pod \"nova-cell0-conductor-db-sync-gs8vz\" (UID: \"463149ce-687b-479c-ab61-030371f69acb\") " pod="openstack/nova-cell0-conductor-db-sync-gs8vz" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.513901 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddvq4\" (UniqueName: \"kubernetes.io/projected/463149ce-687b-479c-ab61-030371f69acb-kube-api-access-ddvq4\") pod \"nova-cell0-conductor-db-sync-gs8vz\" (UID: \"463149ce-687b-479c-ab61-030371f69acb\") " pod="openstack/nova-cell0-conductor-db-sync-gs8vz" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.615792 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/463149ce-687b-479c-ab61-030371f69acb-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gs8vz\" (UID: \"463149ce-687b-479c-ab61-030371f69acb\") " pod="openstack/nova-cell0-conductor-db-sync-gs8vz" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.616209 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/463149ce-687b-479c-ab61-030371f69acb-scripts\") pod \"nova-cell0-conductor-db-sync-gs8vz\" (UID: \"463149ce-687b-479c-ab61-030371f69acb\") " pod="openstack/nova-cell0-conductor-db-sync-gs8vz" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.616266 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/463149ce-687b-479c-ab61-030371f69acb-config-data\") pod \"nova-cell0-conductor-db-sync-gs8vz\" (UID: 
\"463149ce-687b-479c-ab61-030371f69acb\") " pod="openstack/nova-cell0-conductor-db-sync-gs8vz" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.616313 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddvq4\" (UniqueName: \"kubernetes.io/projected/463149ce-687b-479c-ab61-030371f69acb-kube-api-access-ddvq4\") pod \"nova-cell0-conductor-db-sync-gs8vz\" (UID: \"463149ce-687b-479c-ab61-030371f69acb\") " pod="openstack/nova-cell0-conductor-db-sync-gs8vz" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.626727 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/463149ce-687b-479c-ab61-030371f69acb-config-data\") pod \"nova-cell0-conductor-db-sync-gs8vz\" (UID: \"463149ce-687b-479c-ab61-030371f69acb\") " pod="openstack/nova-cell0-conductor-db-sync-gs8vz" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.633203 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/463149ce-687b-479c-ab61-030371f69acb-scripts\") pod \"nova-cell0-conductor-db-sync-gs8vz\" (UID: \"463149ce-687b-479c-ab61-030371f69acb\") " pod="openstack/nova-cell0-conductor-db-sync-gs8vz" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.633742 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/463149ce-687b-479c-ab61-030371f69acb-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gs8vz\" (UID: \"463149ce-687b-479c-ab61-030371f69acb\") " pod="openstack/nova-cell0-conductor-db-sync-gs8vz" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.635732 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddvq4\" (UniqueName: \"kubernetes.io/projected/463149ce-687b-479c-ab61-030371f69acb-kube-api-access-ddvq4\") pod \"nova-cell0-conductor-db-sync-gs8vz\" (UID: \"463149ce-687b-479c-ab61-030371f69acb\") " pod="openstack/nova-cell0-conductor-db-sync-gs8vz" Jan 30 08:30:07 crc kubenswrapper[4870]: I0130 08:30:07.704675 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gs8vz" Jan 30 08:30:08 crc kubenswrapper[4870]: I0130 08:30:08.218103 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 30 08:30:08 crc kubenswrapper[4870]: I0130 08:30:08.221814 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gs8vz"] Jan 30 08:30:08 crc kubenswrapper[4870]: I0130 08:30:08.253474 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 30 08:30:08 crc kubenswrapper[4870]: I0130 08:30:08.273866 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gs8vz" event={"ID":"463149ce-687b-479c-ab61-030371f69acb","Type":"ContainerStarted","Data":"6689eaccda3509e40128b331bffb4a6b1096fb417f907160a5fb9c550c8df122"} Jan 30 08:30:08 crc kubenswrapper[4870]: I0130 08:30:08.274143 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 30 08:30:08 crc kubenswrapper[4870]: I0130 08:30:08.314700 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 30 08:30:08 crc kubenswrapper[4870]: I0130 08:30:08.372175 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 30 08:30:10 crc kubenswrapper[4870]: I0130 08:30:10.303912 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="8628af25-d5e4-46a0-adec-4c25ca39676b" containerName="watcher-decision-engine" containerID="cri-o://f199bc9354b47787d97254410fafef753eb2ceb40e32f6f8bddff5ad8283b2f1" gracePeriod=30 Jan 30 08:30:11 crc kubenswrapper[4870]: I0130 08:30:11.320826 4870 generic.go:334] "Generic (PLEG): container finished" podID="fa8c4e64-0886-44a9-95cb-6d6cc56748c1" containerID="da041b1ddf9f3cc720ab7d45cc97fcefcbe1006c471d1d13b08c5f348d603138" exitCode=0 Jan 30 08:30:11 crc kubenswrapper[4870]: I0130 08:30:11.321179 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa8c4e64-0886-44a9-95cb-6d6cc56748c1","Type":"ContainerDied","Data":"da041b1ddf9f3cc720ab7d45cc97fcefcbe1006c471d1d13b08c5f348d603138"} Jan 30 08:30:12 crc kubenswrapper[4870]: I0130 08:30:12.334467 4870 generic.go:334] "Generic (PLEG): container finished" podID="8628af25-d5e4-46a0-adec-4c25ca39676b" containerID="f199bc9354b47787d97254410fafef753eb2ceb40e32f6f8bddff5ad8283b2f1" exitCode=0 Jan 30 08:30:12 crc kubenswrapper[4870]: I0130 08:30:12.334508 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8628af25-d5e4-46a0-adec-4c25ca39676b","Type":"ContainerDied","Data":"f199bc9354b47787d97254410fafef753eb2ceb40e32f6f8bddff5ad8283b2f1"} Jan 30 08:30:12 crc kubenswrapper[4870]: I0130 08:30:12.334539 4870 scope.go:117] "RemoveContainer" containerID="bbcef241c5e081af384f050ba2bd88c0480b531f6aa26fa9ee5db11c38f751ba" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.060447 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: E0130 08:30:16.069418 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbcef241c5e081af384f050ba2bd88c0480b531f6aa26fa9ee5db11c38f751ba\": container with ID starting with bbcef241c5e081af384f050ba2bd88c0480b531f6aa26fa9ee5db11c38f751ba not found: ID does not exist" containerID="bbcef241c5e081af384f050ba2bd88c0480b531f6aa26fa9ee5db11c38f751ba" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.069487 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.098940 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8628af25-d5e4-46a0-adec-4c25ca39676b-logs\") pod \"8628af25-d5e4-46a0-adec-4c25ca39676b\" (UID: \"8628af25-d5e4-46a0-adec-4c25ca39676b\") " Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.099050 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-log-httpd\") pod \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.099114 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8628af25-d5e4-46a0-adec-4c25ca39676b-custom-prometheus-ca\") pod \"8628af25-d5e4-46a0-adec-4c25ca39676b\" (UID: \"8628af25-d5e4-46a0-adec-4c25ca39676b\") " Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.099152 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-config-data\") pod \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.099168 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-combined-ca-bundle\") pod \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.099282 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-run-httpd\") pod \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.099305 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8628af25-d5e4-46a0-adec-4c25ca39676b-config-data\") pod \"8628af25-d5e4-46a0-adec-4c25ca39676b\" (UID: \"8628af25-d5e4-46a0-adec-4c25ca39676b\") " Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.099325 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvh7f\" (UniqueName: \"kubernetes.io/projected/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-kube-api-access-dvh7f\") pod \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 
08:30:16.099363 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-scripts\") pod \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.099391 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-sg-core-conf-yaml\") pod \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\" (UID: \"fa8c4e64-0886-44a9-95cb-6d6cc56748c1\") " Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.101379 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fa8c4e64-0886-44a9-95cb-6d6cc56748c1" (UID: "fa8c4e64-0886-44a9-95cb-6d6cc56748c1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.101632 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fa8c4e64-0886-44a9-95cb-6d6cc56748c1" (UID: "fa8c4e64-0886-44a9-95cb-6d6cc56748c1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.101858 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8628af25-d5e4-46a0-adec-4c25ca39676b-logs" (OuterVolumeSpecName: "logs") pod "8628af25-d5e4-46a0-adec-4c25ca39676b" (UID: "8628af25-d5e4-46a0-adec-4c25ca39676b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.109226 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-kube-api-access-dvh7f" (OuterVolumeSpecName: "kube-api-access-dvh7f") pod "fa8c4e64-0886-44a9-95cb-6d6cc56748c1" (UID: "fa8c4e64-0886-44a9-95cb-6d6cc56748c1"). InnerVolumeSpecName "kube-api-access-dvh7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.128257 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-scripts" (OuterVolumeSpecName: "scripts") pod "fa8c4e64-0886-44a9-95cb-6d6cc56748c1" (UID: "fa8c4e64-0886-44a9-95cb-6d6cc56748c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.142784 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8628af25-d5e4-46a0-adec-4c25ca39676b-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "8628af25-d5e4-46a0-adec-4c25ca39676b" (UID: "8628af25-d5e4-46a0-adec-4c25ca39676b"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.152633 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fa8c4e64-0886-44a9-95cb-6d6cc56748c1" (UID: "fa8c4e64-0886-44a9-95cb-6d6cc56748c1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.170821 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8628af25-d5e4-46a0-adec-4c25ca39676b-config-data" (OuterVolumeSpecName: "config-data") pod "8628af25-d5e4-46a0-adec-4c25ca39676b" (UID: "8628af25-d5e4-46a0-adec-4c25ca39676b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.201002 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9h65\" (UniqueName: \"kubernetes.io/projected/8628af25-d5e4-46a0-adec-4c25ca39676b-kube-api-access-l9h65\") pod \"8628af25-d5e4-46a0-adec-4c25ca39676b\" (UID: \"8628af25-d5e4-46a0-adec-4c25ca39676b\") " Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.201637 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8628af25-d5e4-46a0-adec-4c25ca39676b-combined-ca-bundle\") pod \"8628af25-d5e4-46a0-adec-4c25ca39676b\" (UID: \"8628af25-d5e4-46a0-adec-4c25ca39676b\") " Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.202461 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8628af25-d5e4-46a0-adec-4c25ca39676b-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.202491 4870 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.202515 4870 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8628af25-d5e4-46a0-adec-4c25ca39676b-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.202603 4870 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.202614 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8628af25-d5e4-46a0-adec-4c25ca39676b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.202624 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvh7f\" (UniqueName: \"kubernetes.io/projected/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-kube-api-access-dvh7f\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.202632 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.202675 4870 
reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.206821 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8628af25-d5e4-46a0-adec-4c25ca39676b-kube-api-access-l9h65" (OuterVolumeSpecName: "kube-api-access-l9h65") pod "8628af25-d5e4-46a0-adec-4c25ca39676b" (UID: "8628af25-d5e4-46a0-adec-4c25ca39676b"). InnerVolumeSpecName "kube-api-access-l9h65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.213173 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-config-data" (OuterVolumeSpecName: "config-data") pod "fa8c4e64-0886-44a9-95cb-6d6cc56748c1" (UID: "fa8c4e64-0886-44a9-95cb-6d6cc56748c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.222114 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa8c4e64-0886-44a9-95cb-6d6cc56748c1" (UID: "fa8c4e64-0886-44a9-95cb-6d6cc56748c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.237075 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8628af25-d5e4-46a0-adec-4c25ca39676b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8628af25-d5e4-46a0-adec-4c25ca39676b" (UID: "8628af25-d5e4-46a0-adec-4c25ca39676b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.304777 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.304809 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8c4e64-0886-44a9-95cb-6d6cc56748c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.304821 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8628af25-d5e4-46a0-adec-4c25ca39676b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.304831 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9h65\" (UniqueName: \"kubernetes.io/projected/8628af25-d5e4-46a0-adec-4c25ca39676b-kube-api-access-l9h65\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.373785 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8628af25-d5e4-46a0-adec-4c25ca39676b","Type":"ContainerDied","Data":"ac51c1c09a44741e0ef19b1f29a0de83b725ade9e973b9a5af1ac06cc507c0dc"} Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.373795 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.373844 4870 scope.go:117] "RemoveContainer" containerID="f199bc9354b47787d97254410fafef753eb2ceb40e32f6f8bddff5ad8283b2f1" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.378597 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa8c4e64-0886-44a9-95cb-6d6cc56748c1","Type":"ContainerDied","Data":"2b005b2e14aaf89586d8bc7aa8c8d809f04257ae6b9ce25164b29b34269d45d4"} Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.378706 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.382076 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gs8vz" event={"ID":"463149ce-687b-479c-ab61-030371f69acb","Type":"ContainerStarted","Data":"ebc3f13bad52a8c63665a782767852e60e31712851103659b19d6a855c623701"} Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.407743 4870 scope.go:117] "RemoveContainer" containerID="1bc5247fa1f481f03b4623f335ffd2271a8804ce7729b91b95a52302230b32d9" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.410211 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-gs8vz" podStartSLOduration=1.5689813799999999 podStartE2EDuration="9.410199593s" podCreationTimestamp="2026-01-30 08:30:07 +0000 UTC" firstStartedPulling="2026-01-30 08:30:08.234963205 +0000 UTC m=+1246.930510324" lastFinishedPulling="2026-01-30 08:30:16.076181418 +0000 UTC m=+1254.771728537" observedRunningTime="2026-01-30 08:30:16.39580135 +0000 UTC m=+1255.091348449" watchObservedRunningTime="2026-01-30 08:30:16.410199593 +0000 UTC m=+1255.105746702" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.434123 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.447673 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.460212 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.464641 4870 scope.go:117] "RemoveContainer" containerID="734eba5ad7a5ddcb654b171f45ad07bc974840fb2dfafb87d21bbd4db9b452f9" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.488264 4870 scope.go:117] "RemoveContainer" containerID="2d34ee72279320556589f3ce491e8f0d6dfe6c0bb318d8e32fba2f3e68f571e6" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.498250 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.506614 4870 scope.go:117] "RemoveContainer" containerID="da041b1ddf9f3cc720ab7d45cc97fcefcbe1006c471d1d13b08c5f348d603138" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.516464 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 30 08:30:16 crc kubenswrapper[4870]: E0130 08:30:16.516821 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa8c4e64-0886-44a9-95cb-6d6cc56748c1" containerName="proxy-httpd" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.516836 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa8c4e64-0886-44a9-95cb-6d6cc56748c1" 
containerName="proxy-httpd" Jan 30 08:30:16 crc kubenswrapper[4870]: E0130 08:30:16.516850 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8628af25-d5e4-46a0-adec-4c25ca39676b" containerName="watcher-decision-engine" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.516856 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="8628af25-d5e4-46a0-adec-4c25ca39676b" containerName="watcher-decision-engine" Jan 30 08:30:16 crc kubenswrapper[4870]: E0130 08:30:16.516865 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa8c4e64-0886-44a9-95cb-6d6cc56748c1" containerName="ceilometer-central-agent" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.516870 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa8c4e64-0886-44a9-95cb-6d6cc56748c1" containerName="ceilometer-central-agent" Jan 30 08:30:16 crc kubenswrapper[4870]: E0130 08:30:16.516896 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8628af25-d5e4-46a0-adec-4c25ca39676b" containerName="watcher-decision-engine" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.516903 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="8628af25-d5e4-46a0-adec-4c25ca39676b" containerName="watcher-decision-engine" Jan 30 08:30:16 crc kubenswrapper[4870]: E0130 08:30:16.516918 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa8c4e64-0886-44a9-95cb-6d6cc56748c1" containerName="ceilometer-notification-agent" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.516923 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa8c4e64-0886-44a9-95cb-6d6cc56748c1" containerName="ceilometer-notification-agent" Jan 30 08:30:16 crc kubenswrapper[4870]: E0130 08:30:16.516937 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8628af25-d5e4-46a0-adec-4c25ca39676b" containerName="watcher-decision-engine" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.516943 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="8628af25-d5e4-46a0-adec-4c25ca39676b" containerName="watcher-decision-engine" Jan 30 08:30:16 crc kubenswrapper[4870]: E0130 08:30:16.516954 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8628af25-d5e4-46a0-adec-4c25ca39676b" containerName="watcher-decision-engine" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.516960 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="8628af25-d5e4-46a0-adec-4c25ca39676b" containerName="watcher-decision-engine" Jan 30 08:30:16 crc kubenswrapper[4870]: E0130 08:30:16.516971 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa8c4e64-0886-44a9-95cb-6d6cc56748c1" containerName="sg-core" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.516976 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa8c4e64-0886-44a9-95cb-6d6cc56748c1" containerName="sg-core" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.517137 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa8c4e64-0886-44a9-95cb-6d6cc56748c1" containerName="ceilometer-notification-agent" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.517150 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="8628af25-d5e4-46a0-adec-4c25ca39676b" containerName="watcher-decision-engine" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.517157 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa8c4e64-0886-44a9-95cb-6d6cc56748c1" containerName="sg-core" Jan 30 08:30:16 crc 
kubenswrapper[4870]: I0130 08:30:16.517194 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa8c4e64-0886-44a9-95cb-6d6cc56748c1" containerName="proxy-httpd" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.517206 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="8628af25-d5e4-46a0-adec-4c25ca39676b" containerName="watcher-decision-engine" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.517218 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="8628af25-d5e4-46a0-adec-4c25ca39676b" containerName="watcher-decision-engine" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.517226 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="8628af25-d5e4-46a0-adec-4c25ca39676b" containerName="watcher-decision-engine" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.517240 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa8c4e64-0886-44a9-95cb-6d6cc56748c1" containerName="ceilometer-central-agent" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.517797 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.520616 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.540723 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.543455 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.547219 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.547598 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.556144 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.568946 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.613254 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/83b9fe73-9106-4f9b-9272-6f12e3fb8177-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"83b9fe73-9106-4f9b-9272-6f12e3fb8177\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.613318 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-config-data\") pod \"ceilometer-0\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.613365 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 
08:30:16.613423 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.613458 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-run-httpd\") pod \"ceilometer-0\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.613682 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83b9fe73-9106-4f9b-9272-6f12e3fb8177-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"83b9fe73-9106-4f9b-9272-6f12e3fb8177\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.613925 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-log-httpd\") pod \"ceilometer-0\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.614006 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v65fb\" (UniqueName: \"kubernetes.io/projected/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-kube-api-access-v65fb\") pod \"ceilometer-0\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.614056 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjx8w\" (UniqueName: \"kubernetes.io/projected/83b9fe73-9106-4f9b-9272-6f12e3fb8177-kube-api-access-pjx8w\") pod \"watcher-decision-engine-0\" (UID: \"83b9fe73-9106-4f9b-9272-6f12e3fb8177\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.614273 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-scripts\") pod \"ceilometer-0\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.614392 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83b9fe73-9106-4f9b-9272-6f12e3fb8177-config-data\") pod \"watcher-decision-engine-0\" (UID: \"83b9fe73-9106-4f9b-9272-6f12e3fb8177\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.614519 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83b9fe73-9106-4f9b-9272-6f12e3fb8177-logs\") pod \"watcher-decision-engine-0\" (UID: \"83b9fe73-9106-4f9b-9272-6f12e3fb8177\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.717005 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/83b9fe73-9106-4f9b-9272-6f12e3fb8177-logs\") pod \"watcher-decision-engine-0\" (UID: \"83b9fe73-9106-4f9b-9272-6f12e3fb8177\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.717117 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/83b9fe73-9106-4f9b-9272-6f12e3fb8177-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"83b9fe73-9106-4f9b-9272-6f12e3fb8177\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.717161 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-config-data\") pod \"ceilometer-0\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.717218 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.717299 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.717340 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-run-httpd\") pod \"ceilometer-0\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.717434 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83b9fe73-9106-4f9b-9272-6f12e3fb8177-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"83b9fe73-9106-4f9b-9272-6f12e3fb8177\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.717606 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-log-httpd\") pod \"ceilometer-0\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.718581 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-run-httpd\") pod \"ceilometer-0\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.718811 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83b9fe73-9106-4f9b-9272-6f12e3fb8177-logs\") pod \"watcher-decision-engine-0\" (UID: \"83b9fe73-9106-4f9b-9272-6f12e3fb8177\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.718953 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-log-httpd\") pod \"ceilometer-0\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.719165 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v65fb\" (UniqueName: \"kubernetes.io/projected/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-kube-api-access-v65fb\") pod \"ceilometer-0\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.719222 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjx8w\" (UniqueName: \"kubernetes.io/projected/83b9fe73-9106-4f9b-9272-6f12e3fb8177-kube-api-access-pjx8w\") pod \"watcher-decision-engine-0\" (UID: \"83b9fe73-9106-4f9b-9272-6f12e3fb8177\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.719314 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-scripts\") pod \"ceilometer-0\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.719370 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83b9fe73-9106-4f9b-9272-6f12e3fb8177-config-data\") pod \"watcher-decision-engine-0\" (UID: \"83b9fe73-9106-4f9b-9272-6f12e3fb8177\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.724261 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83b9fe73-9106-4f9b-9272-6f12e3fb8177-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"83b9fe73-9106-4f9b-9272-6f12e3fb8177\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.724312 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.725588 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-scripts\") pod \"ceilometer-0\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.725784 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83b9fe73-9106-4f9b-9272-6f12e3fb8177-config-data\") pod \"watcher-decision-engine-0\" (UID: \"83b9fe73-9106-4f9b-9272-6f12e3fb8177\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.736983 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-config-data\") pod \"ceilometer-0\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 
08:30:16.739695 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/83b9fe73-9106-4f9b-9272-6f12e3fb8177-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"83b9fe73-9106-4f9b-9272-6f12e3fb8177\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.740031 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.744408 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjx8w\" (UniqueName: \"kubernetes.io/projected/83b9fe73-9106-4f9b-9272-6f12e3fb8177-kube-api-access-pjx8w\") pod \"watcher-decision-engine-0\" (UID: \"83b9fe73-9106-4f9b-9272-6f12e3fb8177\") " pod="openstack/watcher-decision-engine-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.749411 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v65fb\" (UniqueName: \"kubernetes.io/projected/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-kube-api-access-v65fb\") pod \"ceilometer-0\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " pod="openstack/ceilometer-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.839332 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 30 08:30:16 crc kubenswrapper[4870]: I0130 08:30:16.861228 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:30:17 crc kubenswrapper[4870]: I0130 08:30:17.331129 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 30 08:30:17 crc kubenswrapper[4870]: W0130 08:30:17.333552 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83b9fe73_9106_4f9b_9272_6f12e3fb8177.slice/crio-59cd94fd7fcdb82830f502c0f40b770522ca121550e6d1d09e54edebbcbae41d WatchSource:0}: Error finding container 59cd94fd7fcdb82830f502c0f40b770522ca121550e6d1d09e54edebbcbae41d: Status 404 returned error can't find the container with id 59cd94fd7fcdb82830f502c0f40b770522ca121550e6d1d09e54edebbcbae41d Jan 30 08:30:17 crc kubenswrapper[4870]: I0130 08:30:17.401739 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"83b9fe73-9106-4f9b-9272-6f12e3fb8177","Type":"ContainerStarted","Data":"59cd94fd7fcdb82830f502c0f40b770522ca121550e6d1d09e54edebbcbae41d"} Jan 30 08:30:17 crc kubenswrapper[4870]: I0130 08:30:17.423840 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:30:17 crc kubenswrapper[4870]: W0130 08:30:17.433395 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ccaa05d_1e2c_454e_9fa9_80fe3c2397b4.slice/crio-7be4b49268eb8a069cb4d0016e369267f7bb7570d3aba71266ecdbbd2380fa42 WatchSource:0}: Error finding container 7be4b49268eb8a069cb4d0016e369267f7bb7570d3aba71266ecdbbd2380fa42: Status 404 returned error can't find the container with id 7be4b49268eb8a069cb4d0016e369267f7bb7570d3aba71266ecdbbd2380fa42 Jan 30 08:30:18 crc kubenswrapper[4870]: I0130 
08:30:18.087535 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8628af25-d5e4-46a0-adec-4c25ca39676b" path="/var/lib/kubelet/pods/8628af25-d5e4-46a0-adec-4c25ca39676b/volumes" Jan 30 08:30:18 crc kubenswrapper[4870]: I0130 08:30:18.088703 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa8c4e64-0886-44a9-95cb-6d6cc56748c1" path="/var/lib/kubelet/pods/fa8c4e64-0886-44a9-95cb-6d6cc56748c1/volumes" Jan 30 08:30:18 crc kubenswrapper[4870]: I0130 08:30:18.424622 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"83b9fe73-9106-4f9b-9272-6f12e3fb8177","Type":"ContainerStarted","Data":"eb647bce7c3be1548b65527b9d43a9c627a697f978ae8303fb810920844ec093"} Jan 30 08:30:18 crc kubenswrapper[4870]: I0130 08:30:18.428299 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4","Type":"ContainerStarted","Data":"38570dc31256f2263922dd40e563dcdb7b675589f50624f3d2f68c1f793370d2"} Jan 30 08:30:18 crc kubenswrapper[4870]: I0130 08:30:18.428371 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4","Type":"ContainerStarted","Data":"1517f3c15fd718c9d9cfa8b01e312f1d0b907c9e264d9479f92a08326c0056b7"} Jan 30 08:30:18 crc kubenswrapper[4870]: I0130 08:30:18.428385 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4","Type":"ContainerStarted","Data":"7be4b49268eb8a069cb4d0016e369267f7bb7570d3aba71266ecdbbd2380fa42"} Jan 30 08:30:18 crc kubenswrapper[4870]: I0130 08:30:18.448485 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.448464033 podStartE2EDuration="2.448464033s" podCreationTimestamp="2026-01-30 08:30:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:30:18.448213315 +0000 UTC m=+1257.143760424" watchObservedRunningTime="2026-01-30 08:30:18.448464033 +0000 UTC m=+1257.144011142" Jan 30 08:30:19 crc kubenswrapper[4870]: I0130 08:30:19.449414 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4","Type":"ContainerStarted","Data":"50870a595f976100abdb26b9148f62d048280ed33d045282681532815e480cea"} Jan 30 08:30:21 crc kubenswrapper[4870]: I0130 08:30:21.470224 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4","Type":"ContainerStarted","Data":"e4c1e39919fd158510ed9ffc817f40cc08f586836a067977cb44e8b211a54261"} Jan 30 08:30:21 crc kubenswrapper[4870]: I0130 08:30:21.470619 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 08:30:21 crc kubenswrapper[4870]: I0130 08:30:21.503867 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.446582854 podStartE2EDuration="5.50384692s" podCreationTimestamp="2026-01-30 08:30:16 +0000 UTC" firstStartedPulling="2026-01-30 08:30:17.436202689 +0000 UTC m=+1256.131749798" lastFinishedPulling="2026-01-30 08:30:20.493466755 +0000 UTC m=+1259.189013864" observedRunningTime="2026-01-30 08:30:21.500755313 +0000 UTC m=+1260.196302422" watchObservedRunningTime="2026-01-30 
08:30:21.50384692 +0000 UTC m=+1260.199394039" Jan 30 08:30:26 crc kubenswrapper[4870]: I0130 08:30:26.840610 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 30 08:30:26 crc kubenswrapper[4870]: I0130 08:30:26.873839 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 30 08:30:27 crc kubenswrapper[4870]: I0130 08:30:27.141774 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:30:27 crc kubenswrapper[4870]: I0130 08:30:27.142121 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" containerName="ceilometer-central-agent" containerID="cri-o://1517f3c15fd718c9d9cfa8b01e312f1d0b907c9e264d9479f92a08326c0056b7" gracePeriod=30 Jan 30 08:30:27 crc kubenswrapper[4870]: I0130 08:30:27.142165 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" containerName="proxy-httpd" containerID="cri-o://e4c1e39919fd158510ed9ffc817f40cc08f586836a067977cb44e8b211a54261" gracePeriod=30 Jan 30 08:30:27 crc kubenswrapper[4870]: I0130 08:30:27.142215 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" containerName="ceilometer-notification-agent" containerID="cri-o://38570dc31256f2263922dd40e563dcdb7b675589f50624f3d2f68c1f793370d2" gracePeriod=30 Jan 30 08:30:27 crc kubenswrapper[4870]: I0130 08:30:27.142259 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" containerName="sg-core" containerID="cri-o://50870a595f976100abdb26b9148f62d048280ed33d045282681532815e480cea" gracePeriod=30 Jan 30 08:30:27 crc kubenswrapper[4870]: I0130 08:30:27.533610 4870 generic.go:334] "Generic (PLEG): container finished" podID="2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" containerID="e4c1e39919fd158510ed9ffc817f40cc08f586836a067977cb44e8b211a54261" exitCode=0 Jan 30 08:30:27 crc kubenswrapper[4870]: I0130 08:30:27.533995 4870 generic.go:334] "Generic (PLEG): container finished" podID="2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" containerID="50870a595f976100abdb26b9148f62d048280ed33d045282681532815e480cea" exitCode=2 Jan 30 08:30:27 crc kubenswrapper[4870]: I0130 08:30:27.533708 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4","Type":"ContainerDied","Data":"e4c1e39919fd158510ed9ffc817f40cc08f586836a067977cb44e8b211a54261"} Jan 30 08:30:27 crc kubenswrapper[4870]: I0130 08:30:27.534104 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4","Type":"ContainerDied","Data":"50870a595f976100abdb26b9148f62d048280ed33d045282681532815e480cea"} Jan 30 08:30:27 crc kubenswrapper[4870]: I0130 08:30:27.534317 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 30 08:30:27 crc kubenswrapper[4870]: I0130 08:30:27.568599 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 30 08:30:28 crc kubenswrapper[4870]: I0130 08:30:28.549354 4870 generic.go:334] "Generic (PLEG): container finished" 
podID="2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" containerID="1517f3c15fd718c9d9cfa8b01e312f1d0b907c9e264d9479f92a08326c0056b7" exitCode=0 Jan 30 08:30:28 crc kubenswrapper[4870]: I0130 08:30:28.549436 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4","Type":"ContainerDied","Data":"1517f3c15fd718c9d9cfa8b01e312f1d0b907c9e264d9479f92a08326c0056b7"} Jan 30 08:30:29 crc kubenswrapper[4870]: I0130 08:30:29.957418 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.090366 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-config-data\") pod \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.090405 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-combined-ca-bundle\") pod \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.090452 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-run-httpd\") pod \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.090492 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-scripts\") pod \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.090539 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-sg-core-conf-yaml\") pod \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.090575 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-log-httpd\") pod \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.090608 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v65fb\" (UniqueName: \"kubernetes.io/projected/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-kube-api-access-v65fb\") pod \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\" (UID: \"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4\") " Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.091310 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" (UID: "2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.091779 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" (UID: "2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.096031 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-scripts" (OuterVolumeSpecName: "scripts") pod "2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" (UID: "2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.098073 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-kube-api-access-v65fb" (OuterVolumeSpecName: "kube-api-access-v65fb") pod "2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" (UID: "2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4"). InnerVolumeSpecName "kube-api-access-v65fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.127115 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" (UID: "2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.179268 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" (UID: "2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.193013 4870 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.193058 4870 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.193070 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v65fb\" (UniqueName: \"kubernetes.io/projected/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-kube-api-access-v65fb\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.193085 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.193096 4870 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.193104 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.197598 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-config-data" (OuterVolumeSpecName: "config-data") pod "2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" (UID: "2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.295256 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.587402 4870 generic.go:334] "Generic (PLEG): container finished" podID="2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" containerID="38570dc31256f2263922dd40e563dcdb7b675589f50624f3d2f68c1f793370d2" exitCode=0 Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.587469 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4","Type":"ContainerDied","Data":"38570dc31256f2263922dd40e563dcdb7b675589f50624f3d2f68c1f793370d2"} Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.587509 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4","Type":"ContainerDied","Data":"7be4b49268eb8a069cb4d0016e369267f7bb7570d3aba71266ecdbbd2380fa42"} Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.587538 4870 scope.go:117] "RemoveContainer" containerID="e4c1e39919fd158510ed9ffc817f40cc08f586836a067977cb44e8b211a54261" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.587720 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.668748 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.674633 4870 scope.go:117] "RemoveContainer" containerID="50870a595f976100abdb26b9148f62d048280ed33d045282681532815e480cea" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.695523 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.707773 4870 scope.go:117] "RemoveContainer" containerID="38570dc31256f2263922dd40e563dcdb7b675589f50624f3d2f68c1f793370d2" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.732028 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:30:30 crc kubenswrapper[4870]: E0130 08:30:30.732579 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" containerName="ceilometer-notification-agent" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.732982 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" containerName="ceilometer-notification-agent" Jan 30 08:30:30 crc kubenswrapper[4870]: E0130 08:30:30.732998 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" containerName="proxy-httpd" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.733005 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" containerName="proxy-httpd" Jan 30 08:30:30 crc kubenswrapper[4870]: E0130 08:30:30.733030 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" containerName="ceilometer-central-agent" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.733037 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" containerName="ceilometer-central-agent" Jan 30 08:30:30 crc kubenswrapper[4870]: E0130 08:30:30.733064 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" containerName="sg-core" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.733620 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" containerName="sg-core" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.733846 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" containerName="ceilometer-notification-agent" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.733868 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" containerName="sg-core" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.733903 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" containerName="ceilometer-central-agent" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.733917 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" containerName="proxy-httpd" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.736632 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.739587 4870 scope.go:117] "RemoveContainer" containerID="1517f3c15fd718c9d9cfa8b01e312f1d0b907c9e264d9479f92a08326c0056b7" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.739862 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.740139 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.743415 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.775933 4870 scope.go:117] "RemoveContainer" containerID="e4c1e39919fd158510ed9ffc817f40cc08f586836a067977cb44e8b211a54261" Jan 30 08:30:30 crc kubenswrapper[4870]: E0130 08:30:30.776958 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4c1e39919fd158510ed9ffc817f40cc08f586836a067977cb44e8b211a54261\": container with ID starting with e4c1e39919fd158510ed9ffc817f40cc08f586836a067977cb44e8b211a54261 not found: ID does not exist" containerID="e4c1e39919fd158510ed9ffc817f40cc08f586836a067977cb44e8b211a54261" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.777016 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4c1e39919fd158510ed9ffc817f40cc08f586836a067977cb44e8b211a54261"} err="failed to get container status \"e4c1e39919fd158510ed9ffc817f40cc08f586836a067977cb44e8b211a54261\": rpc error: code = NotFound desc = could not find container \"e4c1e39919fd158510ed9ffc817f40cc08f586836a067977cb44e8b211a54261\": container with ID starting with e4c1e39919fd158510ed9ffc817f40cc08f586836a067977cb44e8b211a54261 not found: ID does not exist" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.777044 4870 scope.go:117] "RemoveContainer" containerID="50870a595f976100abdb26b9148f62d048280ed33d045282681532815e480cea" Jan 30 08:30:30 crc kubenswrapper[4870]: E0130 08:30:30.777545 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50870a595f976100abdb26b9148f62d048280ed33d045282681532815e480cea\": container with ID starting with 50870a595f976100abdb26b9148f62d048280ed33d045282681532815e480cea not found: ID does not exist" containerID="50870a595f976100abdb26b9148f62d048280ed33d045282681532815e480cea" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.777575 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50870a595f976100abdb26b9148f62d048280ed33d045282681532815e480cea"} err="failed to get container status \"50870a595f976100abdb26b9148f62d048280ed33d045282681532815e480cea\": rpc error: code = NotFound desc = could not find container \"50870a595f976100abdb26b9148f62d048280ed33d045282681532815e480cea\": container with ID starting with 50870a595f976100abdb26b9148f62d048280ed33d045282681532815e480cea not found: ID does not exist" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.777589 4870 scope.go:117] "RemoveContainer" containerID="38570dc31256f2263922dd40e563dcdb7b675589f50624f3d2f68c1f793370d2" Jan 30 08:30:30 crc kubenswrapper[4870]: E0130 08:30:30.777960 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"38570dc31256f2263922dd40e563dcdb7b675589f50624f3d2f68c1f793370d2\": container with ID starting with 38570dc31256f2263922dd40e563dcdb7b675589f50624f3d2f68c1f793370d2 not found: ID does not exist" containerID="38570dc31256f2263922dd40e563dcdb7b675589f50624f3d2f68c1f793370d2" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.778105 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38570dc31256f2263922dd40e563dcdb7b675589f50624f3d2f68c1f793370d2"} err="failed to get container status \"38570dc31256f2263922dd40e563dcdb7b675589f50624f3d2f68c1f793370d2\": rpc error: code = NotFound desc = could not find container \"38570dc31256f2263922dd40e563dcdb7b675589f50624f3d2f68c1f793370d2\": container with ID starting with 38570dc31256f2263922dd40e563dcdb7b675589f50624f3d2f68c1f793370d2 not found: ID does not exist" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.778277 4870 scope.go:117] "RemoveContainer" containerID="1517f3c15fd718c9d9cfa8b01e312f1d0b907c9e264d9479f92a08326c0056b7" Jan 30 08:30:30 crc kubenswrapper[4870]: E0130 08:30:30.778748 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1517f3c15fd718c9d9cfa8b01e312f1d0b907c9e264d9479f92a08326c0056b7\": container with ID starting with 1517f3c15fd718c9d9cfa8b01e312f1d0b907c9e264d9479f92a08326c0056b7 not found: ID does not exist" containerID="1517f3c15fd718c9d9cfa8b01e312f1d0b907c9e264d9479f92a08326c0056b7" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.778769 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1517f3c15fd718c9d9cfa8b01e312f1d0b907c9e264d9479f92a08326c0056b7"} err="failed to get container status \"1517f3c15fd718c9d9cfa8b01e312f1d0b907c9e264d9479f92a08326c0056b7\": rpc error: code = NotFound desc = could not find container \"1517f3c15fd718c9d9cfa8b01e312f1d0b907c9e264d9479f92a08326c0056b7\": container with ID starting with 1517f3c15fd718c9d9cfa8b01e312f1d0b907c9e264d9479f92a08326c0056b7 not found: ID does not exist" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.804415 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.804472 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.804525 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f11d5abc-9e24-41c5-9e26-22a939d70180-run-httpd\") pod \"ceilometer-0\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.804543 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f11d5abc-9e24-41c5-9e26-22a939d70180-log-httpd\") pod \"ceilometer-0\" (UID: 
\"f11d5abc-9e24-41c5-9e26-22a939d70180\") " pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.804739 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-config-data\") pod \"ceilometer-0\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.804825 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-scripts\") pod \"ceilometer-0\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.805000 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2wxb\" (UniqueName: \"kubernetes.io/projected/f11d5abc-9e24-41c5-9e26-22a939d70180-kube-api-access-l2wxb\") pod \"ceilometer-0\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.907629 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.908183 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f11d5abc-9e24-41c5-9e26-22a939d70180-run-httpd\") pod \"ceilometer-0\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.908376 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f11d5abc-9e24-41c5-9e26-22a939d70180-log-httpd\") pod \"ceilometer-0\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.908607 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-config-data\") pod \"ceilometer-0\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.908850 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-scripts\") pod \"ceilometer-0\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.908863 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f11d5abc-9e24-41c5-9e26-22a939d70180-log-httpd\") pod \"ceilometer-0\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.908961 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f11d5abc-9e24-41c5-9e26-22a939d70180-run-httpd\") pod \"ceilometer-0\" (UID: 
\"f11d5abc-9e24-41c5-9e26-22a939d70180\") " pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.909303 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2wxb\" (UniqueName: \"kubernetes.io/projected/f11d5abc-9e24-41c5-9e26-22a939d70180-kube-api-access-l2wxb\") pod \"ceilometer-0\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.909832 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.913819 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.917560 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-scripts\") pod \"ceilometer-0\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.917753 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-config-data\") pod \"ceilometer-0\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.925815 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " pod="openstack/ceilometer-0" Jan 30 08:30:30 crc kubenswrapper[4870]: I0130 08:30:30.927188 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2wxb\" (UniqueName: \"kubernetes.io/projected/f11d5abc-9e24-41c5-9e26-22a939d70180-kube-api-access-l2wxb\") pod \"ceilometer-0\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " pod="openstack/ceilometer-0" Jan 30 08:30:31 crc kubenswrapper[4870]: I0130 08:30:31.069217 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:30:31 crc kubenswrapper[4870]: W0130 08:30:31.495858 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf11d5abc_9e24_41c5_9e26_22a939d70180.slice/crio-0c2a4eddab15cef0cbeaace28cc33784a69c48cca42cc8364520ed9bf9b84959 WatchSource:0}: Error finding container 0c2a4eddab15cef0cbeaace28cc33784a69c48cca42cc8364520ed9bf9b84959: Status 404 returned error can't find the container with id 0c2a4eddab15cef0cbeaace28cc33784a69c48cca42cc8364520ed9bf9b84959 Jan 30 08:30:31 crc kubenswrapper[4870]: I0130 08:30:31.501757 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:30:31 crc kubenswrapper[4870]: I0130 08:30:31.598873 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f11d5abc-9e24-41c5-9e26-22a939d70180","Type":"ContainerStarted","Data":"0c2a4eddab15cef0cbeaace28cc33784a69c48cca42cc8364520ed9bf9b84959"} Jan 30 08:30:32 crc kubenswrapper[4870]: I0130 08:30:32.099310 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4" path="/var/lib/kubelet/pods/2ccaa05d-1e2c-454e-9fa9-80fe3c2397b4/volumes" Jan 30 08:30:32 crc kubenswrapper[4870]: I0130 08:30:32.615847 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f11d5abc-9e24-41c5-9e26-22a939d70180","Type":"ContainerStarted","Data":"b3b257aee6689f65a0ce66059c2ab16b201b35c4811c0b35bb99d22792234acd"} Jan 30 08:30:32 crc kubenswrapper[4870]: I0130 08:30:32.615921 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f11d5abc-9e24-41c5-9e26-22a939d70180","Type":"ContainerStarted","Data":"0440343c5e151c4584f5e06526e461e0ed6bbe86b28f995cc6f02e8fb5a22cdb"} Jan 30 08:30:32 crc kubenswrapper[4870]: I0130 08:30:32.618378 4870 generic.go:334] "Generic (PLEG): container finished" podID="463149ce-687b-479c-ab61-030371f69acb" containerID="ebc3f13bad52a8c63665a782767852e60e31712851103659b19d6a855c623701" exitCode=0 Jan 30 08:30:32 crc kubenswrapper[4870]: I0130 08:30:32.618426 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gs8vz" event={"ID":"463149ce-687b-479c-ab61-030371f69acb","Type":"ContainerDied","Data":"ebc3f13bad52a8c63665a782767852e60e31712851103659b19d6a855c623701"} Jan 30 08:30:33 crc kubenswrapper[4870]: I0130 08:30:33.648471 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f11d5abc-9e24-41c5-9e26-22a939d70180","Type":"ContainerStarted","Data":"e877fde2e65db2c46b4cbcbdc12f0a69875cd4efef2febf80a45b73912cd72f3"} Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.169498 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gs8vz" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.292850 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/463149ce-687b-479c-ab61-030371f69acb-combined-ca-bundle\") pod \"463149ce-687b-479c-ab61-030371f69acb\" (UID: \"463149ce-687b-479c-ab61-030371f69acb\") " Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.293353 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/463149ce-687b-479c-ab61-030371f69acb-config-data\") pod \"463149ce-687b-479c-ab61-030371f69acb\" (UID: \"463149ce-687b-479c-ab61-030371f69acb\") " Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.293461 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddvq4\" (UniqueName: \"kubernetes.io/projected/463149ce-687b-479c-ab61-030371f69acb-kube-api-access-ddvq4\") pod \"463149ce-687b-479c-ab61-030371f69acb\" (UID: \"463149ce-687b-479c-ab61-030371f69acb\") " Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.293621 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/463149ce-687b-479c-ab61-030371f69acb-scripts\") pod \"463149ce-687b-479c-ab61-030371f69acb\" (UID: \"463149ce-687b-479c-ab61-030371f69acb\") " Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.298665 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/463149ce-687b-479c-ab61-030371f69acb-kube-api-access-ddvq4" (OuterVolumeSpecName: "kube-api-access-ddvq4") pod "463149ce-687b-479c-ab61-030371f69acb" (UID: "463149ce-687b-479c-ab61-030371f69acb"). InnerVolumeSpecName "kube-api-access-ddvq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.299116 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/463149ce-687b-479c-ab61-030371f69acb-scripts" (OuterVolumeSpecName: "scripts") pod "463149ce-687b-479c-ab61-030371f69acb" (UID: "463149ce-687b-479c-ab61-030371f69acb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.303357 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddvq4\" (UniqueName: \"kubernetes.io/projected/463149ce-687b-479c-ab61-030371f69acb-kube-api-access-ddvq4\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.303419 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/463149ce-687b-479c-ab61-030371f69acb-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.320232 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/463149ce-687b-479c-ab61-030371f69acb-config-data" (OuterVolumeSpecName: "config-data") pod "463149ce-687b-479c-ab61-030371f69acb" (UID: "463149ce-687b-479c-ab61-030371f69acb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.326587 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/463149ce-687b-479c-ab61-030371f69acb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "463149ce-687b-479c-ab61-030371f69acb" (UID: "463149ce-687b-479c-ab61-030371f69acb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.404144 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/463149ce-687b-479c-ab61-030371f69acb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.404174 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/463149ce-687b-479c-ab61-030371f69acb-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.660808 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gs8vz" event={"ID":"463149ce-687b-479c-ab61-030371f69acb","Type":"ContainerDied","Data":"6689eaccda3509e40128b331bffb4a6b1096fb417f907160a5fb9c550c8df122"} Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.660856 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6689eaccda3509e40128b331bffb4a6b1096fb417f907160a5fb9c550c8df122" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.660957 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gs8vz" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.761664 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 08:30:34 crc kubenswrapper[4870]: E0130 08:30:34.762050 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="463149ce-687b-479c-ab61-030371f69acb" containerName="nova-cell0-conductor-db-sync" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.762067 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="463149ce-687b-479c-ab61-030371f69acb" containerName="nova-cell0-conductor-db-sync" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.762218 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="463149ce-687b-479c-ab61-030371f69acb" containerName="nova-cell0-conductor-db-sync" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.762810 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.765642 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.765938 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4t5s8" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.782011 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.809907 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9834ddd4-269a-463c-953c-1bf07a7ffdf0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9834ddd4-269a-463c-953c-1bf07a7ffdf0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.809997 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9834ddd4-269a-463c-953c-1bf07a7ffdf0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9834ddd4-269a-463c-953c-1bf07a7ffdf0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.810097 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jvfr\" (UniqueName: \"kubernetes.io/projected/9834ddd4-269a-463c-953c-1bf07a7ffdf0-kube-api-access-8jvfr\") pod \"nova-cell0-conductor-0\" (UID: \"9834ddd4-269a-463c-953c-1bf07a7ffdf0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.915069 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9834ddd4-269a-463c-953c-1bf07a7ffdf0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9834ddd4-269a-463c-953c-1bf07a7ffdf0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.915171 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9834ddd4-269a-463c-953c-1bf07a7ffdf0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9834ddd4-269a-463c-953c-1bf07a7ffdf0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.915260 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jvfr\" (UniqueName: \"kubernetes.io/projected/9834ddd4-269a-463c-953c-1bf07a7ffdf0-kube-api-access-8jvfr\") pod \"nova-cell0-conductor-0\" (UID: \"9834ddd4-269a-463c-953c-1bf07a7ffdf0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.919075 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9834ddd4-269a-463c-953c-1bf07a7ffdf0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9834ddd4-269a-463c-953c-1bf07a7ffdf0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.919509 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9834ddd4-269a-463c-953c-1bf07a7ffdf0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"9834ddd4-269a-463c-953c-1bf07a7ffdf0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:30:34 crc kubenswrapper[4870]: I0130 08:30:34.931751 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jvfr\" (UniqueName: \"kubernetes.io/projected/9834ddd4-269a-463c-953c-1bf07a7ffdf0-kube-api-access-8jvfr\") pod \"nova-cell0-conductor-0\" (UID: \"9834ddd4-269a-463c-953c-1bf07a7ffdf0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:30:35 crc kubenswrapper[4870]: I0130 08:30:35.079050 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 08:30:35 crc kubenswrapper[4870]: I0130 08:30:35.594412 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 08:30:35 crc kubenswrapper[4870]: I0130 08:30:35.691015 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9834ddd4-269a-463c-953c-1bf07a7ffdf0","Type":"ContainerStarted","Data":"ba7277c7f1d864e9e8adb0b704d5d7dd97c1180da3e942a33f62c02bd5288731"} Jan 30 08:30:35 crc kubenswrapper[4870]: I0130 08:30:35.694693 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f11d5abc-9e24-41c5-9e26-22a939d70180","Type":"ContainerStarted","Data":"722f92a03dfb1268adbc4665dd21c44e9be63f5363a6d3d067feb62e2ae20394"} Jan 30 08:30:35 crc kubenswrapper[4870]: I0130 08:30:35.694821 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 08:30:35 crc kubenswrapper[4870]: I0130 08:30:35.732137 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.648352429 podStartE2EDuration="5.732111287s" podCreationTimestamp="2026-01-30 08:30:30 +0000 UTC" firstStartedPulling="2026-01-30 08:30:31.500114093 +0000 UTC m=+1270.195661232" lastFinishedPulling="2026-01-30 08:30:34.583872981 +0000 UTC m=+1273.279420090" observedRunningTime="2026-01-30 08:30:35.715072622 +0000 UTC m=+1274.410619751" watchObservedRunningTime="2026-01-30 08:30:35.732111287 +0000 UTC m=+1274.427658406" Jan 30 08:30:36 crc kubenswrapper[4870]: I0130 08:30:36.711762 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9834ddd4-269a-463c-953c-1bf07a7ffdf0","Type":"ContainerStarted","Data":"23b02075ad542fb7c6d85eae2e1e1a8e5e25c2362ace4d98e7c91a23f271e2da"} Jan 30 08:30:36 crc kubenswrapper[4870]: I0130 08:30:36.761107 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.7610752659999998 podStartE2EDuration="2.761075266s" podCreationTimestamp="2026-01-30 08:30:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:30:36.739504619 +0000 UTC m=+1275.435051728" watchObservedRunningTime="2026-01-30 08:30:36.761075266 +0000 UTC m=+1275.456622415" Jan 30 08:30:37 crc kubenswrapper[4870]: I0130 08:30:37.720650 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 08:30:40 crc kubenswrapper[4870]: I0130 08:30:40.127515 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 08:30:40 crc kubenswrapper[4870]: I0130 08:30:40.756042 4870 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-cell-mapping-vrk8x"] Jan 30 08:30:40 crc kubenswrapper[4870]: I0130 08:30:40.758772 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vrk8x" Jan 30 08:30:40 crc kubenswrapper[4870]: I0130 08:30:40.766350 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 30 08:30:40 crc kubenswrapper[4870]: I0130 08:30:40.766914 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 30 08:30:40 crc kubenswrapper[4870]: I0130 08:30:40.775785 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-vrk8x"] Jan 30 08:30:40 crc kubenswrapper[4870]: I0130 08:30:40.835705 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-config-data\") pod \"nova-cell0-cell-mapping-vrk8x\" (UID: \"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5\") " pod="openstack/nova-cell0-cell-mapping-vrk8x" Jan 30 08:30:40 crc kubenswrapper[4870]: I0130 08:30:40.836147 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2hkv\" (UniqueName: \"kubernetes.io/projected/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-kube-api-access-m2hkv\") pod \"nova-cell0-cell-mapping-vrk8x\" (UID: \"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5\") " pod="openstack/nova-cell0-cell-mapping-vrk8x" Jan 30 08:30:40 crc kubenswrapper[4870]: I0130 08:30:40.836279 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vrk8x\" (UID: \"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5\") " pod="openstack/nova-cell0-cell-mapping-vrk8x" Jan 30 08:30:40 crc kubenswrapper[4870]: I0130 08:30:40.836453 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-scripts\") pod \"nova-cell0-cell-mapping-vrk8x\" (UID: \"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5\") " pod="openstack/nova-cell0-cell-mapping-vrk8x" Jan 30 08:30:40 crc kubenswrapper[4870]: I0130 08:30:40.940132 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-config-data\") pod \"nova-cell0-cell-mapping-vrk8x\" (UID: \"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5\") " pod="openstack/nova-cell0-cell-mapping-vrk8x" Jan 30 08:30:40 crc kubenswrapper[4870]: I0130 08:30:40.943585 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2hkv\" (UniqueName: \"kubernetes.io/projected/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-kube-api-access-m2hkv\") pod \"nova-cell0-cell-mapping-vrk8x\" (UID: \"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5\") " pod="openstack/nova-cell0-cell-mapping-vrk8x" Jan 30 08:30:40 crc kubenswrapper[4870]: I0130 08:30:40.943779 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vrk8x\" (UID: \"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5\") " 
pod="openstack/nova-cell0-cell-mapping-vrk8x" Jan 30 08:30:40 crc kubenswrapper[4870]: I0130 08:30:40.944027 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-scripts\") pod \"nova-cell0-cell-mapping-vrk8x\" (UID: \"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5\") " pod="openstack/nova-cell0-cell-mapping-vrk8x" Jan 30 08:30:40 crc kubenswrapper[4870]: I0130 08:30:40.959724 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vrk8x\" (UID: \"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5\") " pod="openstack/nova-cell0-cell-mapping-vrk8x" Jan 30 08:30:40 crc kubenswrapper[4870]: I0130 08:30:40.962440 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2hkv\" (UniqueName: \"kubernetes.io/projected/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-kube-api-access-m2hkv\") pod \"nova-cell0-cell-mapping-vrk8x\" (UID: \"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5\") " pod="openstack/nova-cell0-cell-mapping-vrk8x" Jan 30 08:30:40 crc kubenswrapper[4870]: I0130 08:30:40.966663 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-scripts\") pod \"nova-cell0-cell-mapping-vrk8x\" (UID: \"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5\") " pod="openstack/nova-cell0-cell-mapping-vrk8x" Jan 30 08:30:40 crc kubenswrapper[4870]: I0130 08:30:40.977725 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-config-data\") pod \"nova-cell0-cell-mapping-vrk8x\" (UID: \"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5\") " pod="openstack/nova-cell0-cell-mapping-vrk8x" Jan 30 08:30:40 crc kubenswrapper[4870]: I0130 08:30:40.985159 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:30:40 crc kubenswrapper[4870]: I0130 08:30:40.986427 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:30:40 crc kubenswrapper[4870]: I0130 08:30:40.990082 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 08:30:40 crc kubenswrapper[4870]: I0130 08:30:40.996942 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.045499 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c987d2-eb6f-4ad7-a6b3-97181526dc24-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7c987d2-eb6f-4ad7-a6b3-97181526dc24\") " pod="openstack/nova-scheduler-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.045620 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85spt\" (UniqueName: \"kubernetes.io/projected/e7c987d2-eb6f-4ad7-a6b3-97181526dc24-kube-api-access-85spt\") pod \"nova-scheduler-0\" (UID: \"e7c987d2-eb6f-4ad7-a6b3-97181526dc24\") " pod="openstack/nova-scheduler-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.045689 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7c987d2-eb6f-4ad7-a6b3-97181526dc24-config-data\") pod \"nova-scheduler-0\" (UID: \"e7c987d2-eb6f-4ad7-a6b3-97181526dc24\") " pod="openstack/nova-scheduler-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.130199 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vrk8x" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.143490 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.144774 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.146637 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.152808 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85spt\" (UniqueName: \"kubernetes.io/projected/e7c987d2-eb6f-4ad7-a6b3-97181526dc24-kube-api-access-85spt\") pod \"nova-scheduler-0\" (UID: \"e7c987d2-eb6f-4ad7-a6b3-97181526dc24\") " pod="openstack/nova-scheduler-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.152913 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7c987d2-eb6f-4ad7-a6b3-97181526dc24-config-data\") pod \"nova-scheduler-0\" (UID: \"e7c987d2-eb6f-4ad7-a6b3-97181526dc24\") " pod="openstack/nova-scheduler-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.153075 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c987d2-eb6f-4ad7-a6b3-97181526dc24-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7c987d2-eb6f-4ad7-a6b3-97181526dc24\") " pod="openstack/nova-scheduler-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.162667 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7c987d2-eb6f-4ad7-a6b3-97181526dc24-config-data\") pod \"nova-scheduler-0\" (UID: \"e7c987d2-eb6f-4ad7-a6b3-97181526dc24\") " pod="openstack/nova-scheduler-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.178379 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.184371 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c987d2-eb6f-4ad7-a6b3-97181526dc24-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7c987d2-eb6f-4ad7-a6b3-97181526dc24\") " pod="openstack/nova-scheduler-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.200331 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85spt\" (UniqueName: \"kubernetes.io/projected/e7c987d2-eb6f-4ad7-a6b3-97181526dc24-kube-api-access-85spt\") pod \"nova-scheduler-0\" (UID: \"e7c987d2-eb6f-4ad7-a6b3-97181526dc24\") " pod="openstack/nova-scheduler-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.235451 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.238542 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.245333 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.247014 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.255977 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.256646 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.257707 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21f6c18c-fcc7-4bd5-9a86-81dacd111e90-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"21f6c18c-fcc7-4bd5-9a86-81dacd111e90\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.257746 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4130fea-be36-47f2-9940-fd3bddcbe3c5-logs\") pod \"nova-metadata-0\" (UID: \"a4130fea-be36-47f2-9940-fd3bddcbe3c5\") " pod="openstack/nova-metadata-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.257834 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21f6c18c-fcc7-4bd5-9a86-81dacd111e90-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"21f6c18c-fcc7-4bd5-9a86-81dacd111e90\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.257850 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czf2b\" (UniqueName: \"kubernetes.io/projected/a4130fea-be36-47f2-9940-fd3bddcbe3c5-kube-api-access-czf2b\") pod \"nova-metadata-0\" (UID: \"a4130fea-be36-47f2-9940-fd3bddcbe3c5\") " pod="openstack/nova-metadata-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.257926 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdt9v\" (UniqueName: \"kubernetes.io/projected/21f6c18c-fcc7-4bd5-9a86-81dacd111e90-kube-api-access-sdt9v\") pod \"nova-cell1-novncproxy-0\" (UID: \"21f6c18c-fcc7-4bd5-9a86-81dacd111e90\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.257943 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4130fea-be36-47f2-9940-fd3bddcbe3c5-config-data\") pod \"nova-metadata-0\" (UID: \"a4130fea-be36-47f2-9940-fd3bddcbe3c5\") " pod="openstack/nova-metadata-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.257963 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4130fea-be36-47f2-9940-fd3bddcbe3c5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a4130fea-be36-47f2-9940-fd3bddcbe3c5\") " pod="openstack/nova-metadata-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.272677 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.306028 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.348673 4870 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-7777964479-kzgv2"] Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.350960 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.359154 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdt9v\" (UniqueName: \"kubernetes.io/projected/21f6c18c-fcc7-4bd5-9a86-81dacd111e90-kube-api-access-sdt9v\") pod \"nova-cell1-novncproxy-0\" (UID: \"21f6c18c-fcc7-4bd5-9a86-81dacd111e90\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.359185 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4130fea-be36-47f2-9940-fd3bddcbe3c5-config-data\") pod \"nova-metadata-0\" (UID: \"a4130fea-be36-47f2-9940-fd3bddcbe3c5\") " pod="openstack/nova-metadata-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.359210 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4130fea-be36-47f2-9940-fd3bddcbe3c5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a4130fea-be36-47f2-9940-fd3bddcbe3c5\") " pod="openstack/nova-metadata-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.359272 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35497556-3464-49d4-9dc2-8f8153a1db82-logs\") pod \"nova-api-0\" (UID: \"35497556-3464-49d4-9dc2-8f8153a1db82\") " pod="openstack/nova-api-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.359292 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21f6c18c-fcc7-4bd5-9a86-81dacd111e90-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"21f6c18c-fcc7-4bd5-9a86-81dacd111e90\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.359317 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4130fea-be36-47f2-9940-fd3bddcbe3c5-logs\") pod \"nova-metadata-0\" (UID: \"a4130fea-be36-47f2-9940-fd3bddcbe3c5\") " pod="openstack/nova-metadata-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.359332 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35497556-3464-49d4-9dc2-8f8153a1db82-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"35497556-3464-49d4-9dc2-8f8153a1db82\") " pod="openstack/nova-api-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.359374 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qwkz\" (UniqueName: \"kubernetes.io/projected/35497556-3464-49d4-9dc2-8f8153a1db82-kube-api-access-9qwkz\") pod \"nova-api-0\" (UID: \"35497556-3464-49d4-9dc2-8f8153a1db82\") " pod="openstack/nova-api-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.359407 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35497556-3464-49d4-9dc2-8f8153a1db82-config-data\") pod \"nova-api-0\" (UID: \"35497556-3464-49d4-9dc2-8f8153a1db82\") " pod="openstack/nova-api-0" Jan 30 08:30:41 
crc kubenswrapper[4870]: I0130 08:30:41.359445 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21f6c18c-fcc7-4bd5-9a86-81dacd111e90-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"21f6c18c-fcc7-4bd5-9a86-81dacd111e90\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.359459 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czf2b\" (UniqueName: \"kubernetes.io/projected/a4130fea-be36-47f2-9940-fd3bddcbe3c5-kube-api-access-czf2b\") pod \"nova-metadata-0\" (UID: \"a4130fea-be36-47f2-9940-fd3bddcbe3c5\") " pod="openstack/nova-metadata-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.363421 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4130fea-be36-47f2-9940-fd3bddcbe3c5-logs\") pod \"nova-metadata-0\" (UID: \"a4130fea-be36-47f2-9940-fd3bddcbe3c5\") " pod="openstack/nova-metadata-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.366373 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.367437 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4130fea-be36-47f2-9940-fd3bddcbe3c5-config-data\") pod \"nova-metadata-0\" (UID: \"a4130fea-be36-47f2-9940-fd3bddcbe3c5\") " pod="openstack/nova-metadata-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.370789 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21f6c18c-fcc7-4bd5-9a86-81dacd111e90-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"21f6c18c-fcc7-4bd5-9a86-81dacd111e90\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.374077 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7777964479-kzgv2"] Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.379395 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czf2b\" (UniqueName: \"kubernetes.io/projected/a4130fea-be36-47f2-9940-fd3bddcbe3c5-kube-api-access-czf2b\") pod \"nova-metadata-0\" (UID: \"a4130fea-be36-47f2-9940-fd3bddcbe3c5\") " pod="openstack/nova-metadata-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.381070 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21f6c18c-fcc7-4bd5-9a86-81dacd111e90-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"21f6c18c-fcc7-4bd5-9a86-81dacd111e90\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.385337 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4130fea-be36-47f2-9940-fd3bddcbe3c5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a4130fea-be36-47f2-9940-fd3bddcbe3c5\") " pod="openstack/nova-metadata-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.402192 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdt9v\" (UniqueName: \"kubernetes.io/projected/21f6c18c-fcc7-4bd5-9a86-81dacd111e90-kube-api-access-sdt9v\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"21f6c18c-fcc7-4bd5-9a86-81dacd111e90\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.465103 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-dns-swift-storage-0\") pod \"dnsmasq-dns-7777964479-kzgv2\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.465176 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qwkz\" (UniqueName: \"kubernetes.io/projected/35497556-3464-49d4-9dc2-8f8153a1db82-kube-api-access-9qwkz\") pod \"nova-api-0\" (UID: \"35497556-3464-49d4-9dc2-8f8153a1db82\") " pod="openstack/nova-api-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.465214 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35497556-3464-49d4-9dc2-8f8153a1db82-config-data\") pod \"nova-api-0\" (UID: \"35497556-3464-49d4-9dc2-8f8153a1db82\") " pod="openstack/nova-api-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.465259 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-ovsdbserver-sb\") pod \"dnsmasq-dns-7777964479-kzgv2\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.465310 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-ovsdbserver-nb\") pod \"dnsmasq-dns-7777964479-kzgv2\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.465332 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br4sd\" (UniqueName: \"kubernetes.io/projected/d5925267-e75f-4398-af96-6856710c57f3-kube-api-access-br4sd\") pod \"dnsmasq-dns-7777964479-kzgv2\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.465353 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-dns-svc\") pod \"dnsmasq-dns-7777964479-kzgv2\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.465375 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-config\") pod \"dnsmasq-dns-7777964479-kzgv2\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.465409 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35497556-3464-49d4-9dc2-8f8153a1db82-logs\") pod \"nova-api-0\" (UID: \"35497556-3464-49d4-9dc2-8f8153a1db82\") " 
pod="openstack/nova-api-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.465440 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35497556-3464-49d4-9dc2-8f8153a1db82-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"35497556-3464-49d4-9dc2-8f8153a1db82\") " pod="openstack/nova-api-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.466234 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35497556-3464-49d4-9dc2-8f8153a1db82-logs\") pod \"nova-api-0\" (UID: \"35497556-3464-49d4-9dc2-8f8153a1db82\") " pod="openstack/nova-api-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.471437 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35497556-3464-49d4-9dc2-8f8153a1db82-config-data\") pod \"nova-api-0\" (UID: \"35497556-3464-49d4-9dc2-8f8153a1db82\") " pod="openstack/nova-api-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.471694 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35497556-3464-49d4-9dc2-8f8153a1db82-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"35497556-3464-49d4-9dc2-8f8153a1db82\") " pod="openstack/nova-api-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.493910 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qwkz\" (UniqueName: \"kubernetes.io/projected/35497556-3464-49d4-9dc2-8f8153a1db82-kube-api-access-9qwkz\") pod \"nova-api-0\" (UID: \"35497556-3464-49d4-9dc2-8f8153a1db82\") " pod="openstack/nova-api-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.565949 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-config\") pod \"dnsmasq-dns-7777964479-kzgv2\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.566075 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-dns-swift-storage-0\") pod \"dnsmasq-dns-7777964479-kzgv2\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.566201 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-ovsdbserver-sb\") pod \"dnsmasq-dns-7777964479-kzgv2\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.566266 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-ovsdbserver-nb\") pod \"dnsmasq-dns-7777964479-kzgv2\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.566302 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br4sd\" (UniqueName: 
\"kubernetes.io/projected/d5925267-e75f-4398-af96-6856710c57f3-kube-api-access-br4sd\") pod \"dnsmasq-dns-7777964479-kzgv2\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.566327 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-dns-svc\") pod \"dnsmasq-dns-7777964479-kzgv2\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.566973 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-dns-swift-storage-0\") pod \"dnsmasq-dns-7777964479-kzgv2\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.567020 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-config\") pod \"dnsmasq-dns-7777964479-kzgv2\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.567683 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-dns-svc\") pod \"dnsmasq-dns-7777964479-kzgv2\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.568170 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-ovsdbserver-nb\") pod \"dnsmasq-dns-7777964479-kzgv2\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.568229 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-ovsdbserver-sb\") pod \"dnsmasq-dns-7777964479-kzgv2\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.575639 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.585515 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br4sd\" (UniqueName: \"kubernetes.io/projected/d5925267-e75f-4398-af96-6856710c57f3-kube-api-access-br4sd\") pod \"dnsmasq-dns-7777964479-kzgv2\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.614647 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.632693 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.730730 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.802138 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-vrk8x"] Jan 30 08:30:41 crc kubenswrapper[4870]: W0130 08:30:41.828055 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e3f03b1_ce9f_4f1d_8bb9_eecb941268c5.slice/crio-cabf47e5c9ce25b5156d8706dcfa4a8797c2b4154837e6413d0ebe4fba915dd9 WatchSource:0}: Error finding container cabf47e5c9ce25b5156d8706dcfa4a8797c2b4154837e6413d0ebe4fba915dd9: Status 404 returned error can't find the container with id cabf47e5c9ce25b5156d8706dcfa4a8797c2b4154837e6413d0ebe4fba915dd9 Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.884330 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-m882v"] Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.885515 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-m882v" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.887927 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.889035 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 30 08:30:41 crc kubenswrapper[4870]: I0130 08:30:41.901177 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-m882v"] Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.024009 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2s2k\" (UniqueName: \"kubernetes.io/projected/df7d1e35-e72c-4a05-8a4a-89647f93a26c-kube-api-access-d2s2k\") pod \"nova-cell1-conductor-db-sync-m882v\" (UID: \"df7d1e35-e72c-4a05-8a4a-89647f93a26c\") " pod="openstack/nova-cell1-conductor-db-sync-m882v" Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.024395 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7d1e35-e72c-4a05-8a4a-89647f93a26c-scripts\") pod \"nova-cell1-conductor-db-sync-m882v\" (UID: \"df7d1e35-e72c-4a05-8a4a-89647f93a26c\") " pod="openstack/nova-cell1-conductor-db-sync-m882v" Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.024586 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7d1e35-e72c-4a05-8a4a-89647f93a26c-config-data\") pod \"nova-cell1-conductor-db-sync-m882v\" (UID: \"df7d1e35-e72c-4a05-8a4a-89647f93a26c\") " pod="openstack/nova-cell1-conductor-db-sync-m882v" Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.024835 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7d1e35-e72c-4a05-8a4a-89647f93a26c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-m882v\" (UID: \"df7d1e35-e72c-4a05-8a4a-89647f93a26c\") " pod="openstack/nova-cell1-conductor-db-sync-m882v" Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.039383 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.145290 4870 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7d1e35-e72c-4a05-8a4a-89647f93a26c-config-data\") pod \"nova-cell1-conductor-db-sync-m882v\" (UID: \"df7d1e35-e72c-4a05-8a4a-89647f93a26c\") " pod="openstack/nova-cell1-conductor-db-sync-m882v" Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.145603 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7d1e35-e72c-4a05-8a4a-89647f93a26c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-m882v\" (UID: \"df7d1e35-e72c-4a05-8a4a-89647f93a26c\") " pod="openstack/nova-cell1-conductor-db-sync-m882v" Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.145640 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2s2k\" (UniqueName: \"kubernetes.io/projected/df7d1e35-e72c-4a05-8a4a-89647f93a26c-kube-api-access-d2s2k\") pod \"nova-cell1-conductor-db-sync-m882v\" (UID: \"df7d1e35-e72c-4a05-8a4a-89647f93a26c\") " pod="openstack/nova-cell1-conductor-db-sync-m882v" Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.145681 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7d1e35-e72c-4a05-8a4a-89647f93a26c-scripts\") pod \"nova-cell1-conductor-db-sync-m882v\" (UID: \"df7d1e35-e72c-4a05-8a4a-89647f93a26c\") " pod="openstack/nova-cell1-conductor-db-sync-m882v" Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.169967 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7d1e35-e72c-4a05-8a4a-89647f93a26c-scripts\") pod \"nova-cell1-conductor-db-sync-m882v\" (UID: \"df7d1e35-e72c-4a05-8a4a-89647f93a26c\") " pod="openstack/nova-cell1-conductor-db-sync-m882v" Jan 30 08:30:42 crc kubenswrapper[4870]: W0130 08:30:42.170097 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7c987d2_eb6f_4ad7_a6b3_97181526dc24.slice/crio-4844fa8b96732628baf7088ac1c015a802eaab1095cb6ae769aae1a9f257db48 WatchSource:0}: Error finding container 4844fa8b96732628baf7088ac1c015a802eaab1095cb6ae769aae1a9f257db48: Status 404 returned error can't find the container with id 4844fa8b96732628baf7088ac1c015a802eaab1095cb6ae769aae1a9f257db48 Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.170223 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2s2k\" (UniqueName: \"kubernetes.io/projected/df7d1e35-e72c-4a05-8a4a-89647f93a26c-kube-api-access-d2s2k\") pod \"nova-cell1-conductor-db-sync-m882v\" (UID: \"df7d1e35-e72c-4a05-8a4a-89647f93a26c\") " pod="openstack/nova-cell1-conductor-db-sync-m882v" Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.172476 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7d1e35-e72c-4a05-8a4a-89647f93a26c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-m882v\" (UID: \"df7d1e35-e72c-4a05-8a4a-89647f93a26c\") " pod="openstack/nova-cell1-conductor-db-sync-m882v" Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.186464 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7d1e35-e72c-4a05-8a4a-89647f93a26c-config-data\") pod \"nova-cell1-conductor-db-sync-m882v\" (UID: 
\"df7d1e35-e72c-4a05-8a4a-89647f93a26c\") " pod="openstack/nova-cell1-conductor-db-sync-m882v" Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.253816 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-m882v" Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.460312 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.564447 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.581075 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.698073 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7777964479-kzgv2"] Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.789100 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7777964479-kzgv2" event={"ID":"d5925267-e75f-4398-af96-6856710c57f3","Type":"ContainerStarted","Data":"0d8401900436a6761c400a0be0a0bdd42a9aa8031b291c68c5469cef3ec4cd02"} Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.791440 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4130fea-be36-47f2-9940-fd3bddcbe3c5","Type":"ContainerStarted","Data":"2bab53308716b22e97e1a942ac95ceca0d270b7f4b42f3d5be8a0178321b83a8"} Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.794245 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vrk8x" event={"ID":"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5","Type":"ContainerStarted","Data":"d505e04dc454937c02de4ea80fb1b30e9ec281deb651bdc207ab606295f95619"} Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.794289 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vrk8x" event={"ID":"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5","Type":"ContainerStarted","Data":"cabf47e5c9ce25b5156d8706dcfa4a8797c2b4154837e6413d0ebe4fba915dd9"} Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.795501 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"21f6c18c-fcc7-4bd5-9a86-81dacd111e90","Type":"ContainerStarted","Data":"ad4cc5b31839637e7fda99e5e00f775d4b4a8266d3fec39e6cf700bdb0b11a6c"} Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.797095 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7c987d2-eb6f-4ad7-a6b3-97181526dc24","Type":"ContainerStarted","Data":"4844fa8b96732628baf7088ac1c015a802eaab1095cb6ae769aae1a9f257db48"} Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.798405 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"35497556-3464-49d4-9dc2-8f8153a1db82","Type":"ContainerStarted","Data":"4241ed3c0f6370f180f8f986d85580b542b235a9c663a81abf39c2245d59012a"} Jan 30 08:30:42 crc kubenswrapper[4870]: I0130 08:30:42.822888 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-m882v"] Jan 30 08:30:42 crc kubenswrapper[4870]: W0130 08:30:42.831345 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf7d1e35_e72c_4a05_8a4a_89647f93a26c.slice/crio-1180a868984313de66495d96e16fbf7b9e7f4d89169ce27b87d1df78e6c1cbcf 
WatchSource:0}: Error finding container 1180a868984313de66495d96e16fbf7b9e7f4d89169ce27b87d1df78e6c1cbcf: Status 404 returned error can't find the container with id 1180a868984313de66495d96e16fbf7b9e7f4d89169ce27b87d1df78e6c1cbcf Jan 30 08:30:43 crc kubenswrapper[4870]: I0130 08:30:43.811361 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7777964479-kzgv2" event={"ID":"d5925267-e75f-4398-af96-6856710c57f3","Type":"ContainerStarted","Data":"895f5e0a2008516657010356d30e83d3b79850fdf910e5ede0c0b5280b3040c2"} Jan 30 08:30:43 crc kubenswrapper[4870]: I0130 08:30:43.819741 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-m882v" event={"ID":"df7d1e35-e72c-4a05-8a4a-89647f93a26c","Type":"ContainerStarted","Data":"8042d48e5c92127670e964e37628c795eb4a833864e07f5be3a23644c40ab2aa"} Jan 30 08:30:43 crc kubenswrapper[4870]: I0130 08:30:43.819789 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-m882v" event={"ID":"df7d1e35-e72c-4a05-8a4a-89647f93a26c","Type":"ContainerStarted","Data":"1180a868984313de66495d96e16fbf7b9e7f4d89169ce27b87d1df78e6c1cbcf"} Jan 30 08:30:43 crc kubenswrapper[4870]: I0130 08:30:43.859350 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-vrk8x" podStartSLOduration=3.8593329450000002 podStartE2EDuration="3.859332945s" podCreationTimestamp="2026-01-30 08:30:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:30:43.856362082 +0000 UTC m=+1282.551909191" watchObservedRunningTime="2026-01-30 08:30:43.859332945 +0000 UTC m=+1282.554880054" Jan 30 08:30:43 crc kubenswrapper[4870]: I0130 08:30:43.879292 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-m882v" podStartSLOduration=2.879272252 podStartE2EDuration="2.879272252s" podCreationTimestamp="2026-01-30 08:30:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:30:43.870066574 +0000 UTC m=+1282.565613703" watchObservedRunningTime="2026-01-30 08:30:43.879272252 +0000 UTC m=+1282.574819361" Jan 30 08:30:44 crc kubenswrapper[4870]: I0130 08:30:44.536500 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 08:30:44 crc kubenswrapper[4870]: I0130 08:30:44.571856 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:30:44 crc kubenswrapper[4870]: I0130 08:30:44.835615 4870 generic.go:334] "Generic (PLEG): container finished" podID="d5925267-e75f-4398-af96-6856710c57f3" containerID="895f5e0a2008516657010356d30e83d3b79850fdf910e5ede0c0b5280b3040c2" exitCode=0 Jan 30 08:30:44 crc kubenswrapper[4870]: I0130 08:30:44.836275 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7777964479-kzgv2" event={"ID":"d5925267-e75f-4398-af96-6856710c57f3","Type":"ContainerDied","Data":"895f5e0a2008516657010356d30e83d3b79850fdf910e5ede0c0b5280b3040c2"} Jan 30 08:30:46 crc kubenswrapper[4870]: I0130 08:30:46.866220 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7777964479-kzgv2" event={"ID":"d5925267-e75f-4398-af96-6856710c57f3","Type":"ContainerStarted","Data":"fa09eeaef0e8d067370ba4e9a769247437b75f3bbc783ef72b9b39a713b37db0"} Jan 30 08:30:47 crc 
kubenswrapper[4870]: I0130 08:30:47.881031 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:30:47 crc kubenswrapper[4870]: I0130 08:30:47.929526 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7777964479-kzgv2" podStartSLOduration=6.929496956 podStartE2EDuration="6.929496956s" podCreationTimestamp="2026-01-30 08:30:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:30:47.900271358 +0000 UTC m=+1286.595818497" watchObservedRunningTime="2026-01-30 08:30:47.929496956 +0000 UTC m=+1286.625044075" Jan 30 08:30:51 crc kubenswrapper[4870]: I0130 08:30:51.735648 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:30:51 crc kubenswrapper[4870]: I0130 08:30:51.814644 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d675956bc-zzkss"] Jan 30 08:30:51 crc kubenswrapper[4870]: I0130 08:30:51.815365 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d675956bc-zzkss" podUID="3688605b-306e-4093-93d5-b96cae2a80de" containerName="dnsmasq-dns" containerID="cri-o://0e54e3d5282b98c208b0bfac76119dbcb9798dc658f490c1212de7282601c32e" gracePeriod=10 Jan 30 08:30:51 crc kubenswrapper[4870]: I0130 08:30:51.942036 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"35497556-3464-49d4-9dc2-8f8153a1db82","Type":"ContainerStarted","Data":"f6e6311173c91fb7efc38adadec81d62d3654a13f033876ef586eb719910253a"} Jan 30 08:30:51 crc kubenswrapper[4870]: I0130 08:30:51.944845 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4130fea-be36-47f2-9940-fd3bddcbe3c5","Type":"ContainerStarted","Data":"e040ac1b4d2768ad2859ffb22248f63acb61fa92fe83ab4891bb419e25c4da11"} Jan 30 08:30:51 crc kubenswrapper[4870]: I0130 08:30:51.946567 4870 generic.go:334] "Generic (PLEG): container finished" podID="8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5" containerID="d505e04dc454937c02de4ea80fb1b30e9ec281deb651bdc207ab606295f95619" exitCode=0 Jan 30 08:30:51 crc kubenswrapper[4870]: I0130 08:30:51.946644 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vrk8x" event={"ID":"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5","Type":"ContainerDied","Data":"d505e04dc454937c02de4ea80fb1b30e9ec281deb651bdc207ab606295f95619"} Jan 30 08:30:51 crc kubenswrapper[4870]: I0130 08:30:51.947809 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"21f6c18c-fcc7-4bd5-9a86-81dacd111e90","Type":"ContainerStarted","Data":"cf022953959b6108a335c22e59a92909beccc351b6ace66848278caae812affb"} Jan 30 08:30:51 crc kubenswrapper[4870]: I0130 08:30:51.948000 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="21f6c18c-fcc7-4bd5-9a86-81dacd111e90" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://cf022953959b6108a335c22e59a92909beccc351b6ace66848278caae812affb" gracePeriod=30 Jan 30 08:30:51 crc kubenswrapper[4870]: I0130 08:30:51.959669 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"e7c987d2-eb6f-4ad7-a6b3-97181526dc24","Type":"ContainerStarted","Data":"eeeda60427741389f29ce7682e68db91dead5b11853ceb5b520159a71514e942"} Jan 30 08:30:51 crc kubenswrapper[4870]: I0130 08:30:51.991979 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.125504899 podStartE2EDuration="11.991962834s" podCreationTimestamp="2026-01-30 08:30:40 +0000 UTC" firstStartedPulling="2026-01-30 08:30:42.172226489 +0000 UTC m=+1280.867773598" lastFinishedPulling="2026-01-30 08:30:51.038684394 +0000 UTC m=+1289.734231533" observedRunningTime="2026-01-30 08:30:51.982671293 +0000 UTC m=+1290.678218402" watchObservedRunningTime="2026-01-30 08:30:51.991962834 +0000 UTC m=+1290.687509933" Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.007575 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.95936472 podStartE2EDuration="11.007556975s" podCreationTimestamp="2026-01-30 08:30:41 +0000 UTC" firstStartedPulling="2026-01-30 08:30:42.46189952 +0000 UTC m=+1281.157446629" lastFinishedPulling="2026-01-30 08:30:51.510091735 +0000 UTC m=+1290.205638884" observedRunningTime="2026-01-30 08:30:52.001085781 +0000 UTC m=+1290.696632890" watchObservedRunningTime="2026-01-30 08:30:52.007556975 +0000 UTC m=+1290.703104084" Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.451669 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.584987 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-ovsdbserver-nb\") pod \"3688605b-306e-4093-93d5-b96cae2a80de\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.585354 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-dns-swift-storage-0\") pod \"3688605b-306e-4093-93d5-b96cae2a80de\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.585376 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xpwr\" (UniqueName: \"kubernetes.io/projected/3688605b-306e-4093-93d5-b96cae2a80de-kube-api-access-6xpwr\") pod \"3688605b-306e-4093-93d5-b96cae2a80de\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.585551 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-dns-svc\") pod \"3688605b-306e-4093-93d5-b96cae2a80de\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.585590 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-ovsdbserver-sb\") pod \"3688605b-306e-4093-93d5-b96cae2a80de\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.585619 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-config\") pod \"3688605b-306e-4093-93d5-b96cae2a80de\" (UID: \"3688605b-306e-4093-93d5-b96cae2a80de\") " Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.598087 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3688605b-306e-4093-93d5-b96cae2a80de-kube-api-access-6xpwr" (OuterVolumeSpecName: "kube-api-access-6xpwr") pod "3688605b-306e-4093-93d5-b96cae2a80de" (UID: "3688605b-306e-4093-93d5-b96cae2a80de"). InnerVolumeSpecName "kube-api-access-6xpwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.642683 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3688605b-306e-4093-93d5-b96cae2a80de" (UID: "3688605b-306e-4093-93d5-b96cae2a80de"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.655803 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-config" (OuterVolumeSpecName: "config") pod "3688605b-306e-4093-93d5-b96cae2a80de" (UID: "3688605b-306e-4093-93d5-b96cae2a80de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.658463 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3688605b-306e-4093-93d5-b96cae2a80de" (UID: "3688605b-306e-4093-93d5-b96cae2a80de"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.662898 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3688605b-306e-4093-93d5-b96cae2a80de" (UID: "3688605b-306e-4093-93d5-b96cae2a80de"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.678155 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3688605b-306e-4093-93d5-b96cae2a80de" (UID: "3688605b-306e-4093-93d5-b96cae2a80de"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.687999 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.688106 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.688123 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.688133 4870 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.688144 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xpwr\" (UniqueName: \"kubernetes.io/projected/3688605b-306e-4093-93d5-b96cae2a80de-kube-api-access-6xpwr\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.688154 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3688605b-306e-4093-93d5-b96cae2a80de-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.993704 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"35497556-3464-49d4-9dc2-8f8153a1db82","Type":"ContainerStarted","Data":"3fe2bc5983328951bf21fed89d98d84320282998186c3a7ff39b45c9f020cb9e"} Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.999431 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4130fea-be36-47f2-9940-fd3bddcbe3c5","Type":"ContainerStarted","Data":"690f1c21a74988a80744d4141420ba5d3d5d318a18b2e422108e6e5ee3f7be87"} Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.999615 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a4130fea-be36-47f2-9940-fd3bddcbe3c5" containerName="nova-metadata-log" containerID="cri-o://e040ac1b4d2768ad2859ffb22248f63acb61fa92fe83ab4891bb419e25c4da11" gracePeriod=30 Jan 30 08:30:52 crc kubenswrapper[4870]: I0130 08:30:52.999646 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a4130fea-be36-47f2-9940-fd3bddcbe3c5" containerName="nova-metadata-metadata" containerID="cri-o://690f1c21a74988a80744d4141420ba5d3d5d318a18b2e422108e6e5ee3f7be87" gracePeriod=30 Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.003531 4870 generic.go:334] "Generic (PLEG): container finished" podID="3688605b-306e-4093-93d5-b96cae2a80de" containerID="0e54e3d5282b98c208b0bfac76119dbcb9798dc658f490c1212de7282601c32e" exitCode=0 Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.003602 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d675956bc-zzkss" event={"ID":"3688605b-306e-4093-93d5-b96cae2a80de","Type":"ContainerDied","Data":"0e54e3d5282b98c208b0bfac76119dbcb9798dc658f490c1212de7282601c32e"} Jan 30 08:30:53 crc 
kubenswrapper[4870]: I0130 08:30:53.003645 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d675956bc-zzkss" Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.003657 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d675956bc-zzkss" event={"ID":"3688605b-306e-4093-93d5-b96cae2a80de","Type":"ContainerDied","Data":"38880a59bd8e8ad87943840cbaa98251faf8d234264c0ca4ae49cc1e495e8ef5"} Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.003681 4870 scope.go:117] "RemoveContainer" containerID="0e54e3d5282b98c208b0bfac76119dbcb9798dc658f490c1212de7282601c32e" Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.036864 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.55776425 podStartE2EDuration="12.032859009s" podCreationTimestamp="2026-01-30 08:30:41 +0000 UTC" firstStartedPulling="2026-01-30 08:30:42.574822698 +0000 UTC m=+1281.270369807" lastFinishedPulling="2026-01-30 08:30:51.049917457 +0000 UTC m=+1289.745464566" observedRunningTime="2026-01-30 08:30:53.030235777 +0000 UTC m=+1291.725782906" watchObservedRunningTime="2026-01-30 08:30:53.032859009 +0000 UTC m=+1291.728406138" Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.049419 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.112773369 podStartE2EDuration="12.049396308s" podCreationTimestamp="2026-01-30 08:30:41 +0000 UTC" firstStartedPulling="2026-01-30 08:30:42.577692218 +0000 UTC m=+1281.273239327" lastFinishedPulling="2026-01-30 08:30:51.514315137 +0000 UTC m=+1290.209862266" observedRunningTime="2026-01-30 08:30:53.047115816 +0000 UTC m=+1291.742662935" watchObservedRunningTime="2026-01-30 08:30:53.049396308 +0000 UTC m=+1291.744943417" Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.055480 4870 scope.go:117] "RemoveContainer" containerID="e42b42a5c9ab862c026277e738c25344a1c208eaa58c5cd18419418c9ff99fc8" Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.074393 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d675956bc-zzkss"] Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.083998 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d675956bc-zzkss"] Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.121100 4870 scope.go:117] "RemoveContainer" containerID="0e54e3d5282b98c208b0bfac76119dbcb9798dc658f490c1212de7282601c32e" Jan 30 08:30:53 crc kubenswrapper[4870]: E0130 08:30:53.121712 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e54e3d5282b98c208b0bfac76119dbcb9798dc658f490c1212de7282601c32e\": container with ID starting with 0e54e3d5282b98c208b0bfac76119dbcb9798dc658f490c1212de7282601c32e not found: ID does not exist" containerID="0e54e3d5282b98c208b0bfac76119dbcb9798dc658f490c1212de7282601c32e" Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.121843 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e54e3d5282b98c208b0bfac76119dbcb9798dc658f490c1212de7282601c32e"} err="failed to get container status \"0e54e3d5282b98c208b0bfac76119dbcb9798dc658f490c1212de7282601c32e\": rpc error: code = NotFound desc = could not find container \"0e54e3d5282b98c208b0bfac76119dbcb9798dc658f490c1212de7282601c32e\": container with ID starting with 
0e54e3d5282b98c208b0bfac76119dbcb9798dc658f490c1212de7282601c32e not found: ID does not exist"
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.121962 4870 scope.go:117] "RemoveContainer" containerID="e42b42a5c9ab862c026277e738c25344a1c208eaa58c5cd18419418c9ff99fc8"
Jan 30 08:30:53 crc kubenswrapper[4870]: E0130 08:30:53.122368 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e42b42a5c9ab862c026277e738c25344a1c208eaa58c5cd18419418c9ff99fc8\": container with ID starting with e42b42a5c9ab862c026277e738c25344a1c208eaa58c5cd18419418c9ff99fc8 not found: ID does not exist" containerID="e42b42a5c9ab862c026277e738c25344a1c208eaa58c5cd18419418c9ff99fc8"
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.122405 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e42b42a5c9ab862c026277e738c25344a1c208eaa58c5cd18419418c9ff99fc8"} err="failed to get container status \"e42b42a5c9ab862c026277e738c25344a1c208eaa58c5cd18419418c9ff99fc8\": rpc error: code = NotFound desc = could not find container \"e42b42a5c9ab862c026277e738c25344a1c208eaa58c5cd18419418c9ff99fc8\": container with ID starting with e42b42a5c9ab862c026277e738c25344a1c208eaa58c5cd18419418c9ff99fc8 not found: ID does not exist"
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.601047 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vrk8x"
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.610916 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.709149 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4130fea-be36-47f2-9940-fd3bddcbe3c5-combined-ca-bundle\") pod \"a4130fea-be36-47f2-9940-fd3bddcbe3c5\" (UID: \"a4130fea-be36-47f2-9940-fd3bddcbe3c5\") "
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.709220 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2hkv\" (UniqueName: \"kubernetes.io/projected/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-kube-api-access-m2hkv\") pod \"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5\" (UID: \"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5\") "
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.709243 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-combined-ca-bundle\") pod \"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5\" (UID: \"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5\") "
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.709259 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-scripts\") pod \"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5\" (UID: \"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5\") "
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.709275 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4130fea-be36-47f2-9940-fd3bddcbe3c5-logs\") pod \"a4130fea-be36-47f2-9940-fd3bddcbe3c5\" (UID: \"a4130fea-be36-47f2-9940-fd3bddcbe3c5\") "
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.709344 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4130fea-be36-47f2-9940-fd3bddcbe3c5-config-data\") pod \"a4130fea-be36-47f2-9940-fd3bddcbe3c5\" (UID: \"a4130fea-be36-47f2-9940-fd3bddcbe3c5\") "
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.709487 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-config-data\") pod \"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5\" (UID: \"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5\") "
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.709552 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czf2b\" (UniqueName: \"kubernetes.io/projected/a4130fea-be36-47f2-9940-fd3bddcbe3c5-kube-api-access-czf2b\") pod \"a4130fea-be36-47f2-9940-fd3bddcbe3c5\" (UID: \"a4130fea-be36-47f2-9940-fd3bddcbe3c5\") "
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.710367 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4130fea-be36-47f2-9940-fd3bddcbe3c5-logs" (OuterVolumeSpecName: "logs") pod "a4130fea-be36-47f2-9940-fd3bddcbe3c5" (UID: "a4130fea-be36-47f2-9940-fd3bddcbe3c5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.710616 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4130fea-be36-47f2-9940-fd3bddcbe3c5-logs\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.714467 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-kube-api-access-m2hkv" (OuterVolumeSpecName: "kube-api-access-m2hkv") pod "8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5" (UID: "8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5"). InnerVolumeSpecName "kube-api-access-m2hkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.715985 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-scripts" (OuterVolumeSpecName: "scripts") pod "8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5" (UID: "8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.722762 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4130fea-be36-47f2-9940-fd3bddcbe3c5-kube-api-access-czf2b" (OuterVolumeSpecName: "kube-api-access-czf2b") pod "a4130fea-be36-47f2-9940-fd3bddcbe3c5" (UID: "a4130fea-be36-47f2-9940-fd3bddcbe3c5"). InnerVolumeSpecName "kube-api-access-czf2b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.739381 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4130fea-be36-47f2-9940-fd3bddcbe3c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4130fea-be36-47f2-9940-fd3bddcbe3c5" (UID: "a4130fea-be36-47f2-9940-fd3bddcbe3c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.744840 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5" (UID: "8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.747027 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-config-data" (OuterVolumeSpecName: "config-data") pod "8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5" (UID: "8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.749119 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4130fea-be36-47f2-9940-fd3bddcbe3c5-config-data" (OuterVolumeSpecName: "config-data") pod "a4130fea-be36-47f2-9940-fd3bddcbe3c5" (UID: "a4130fea-be36-47f2-9940-fd3bddcbe3c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.813192 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2hkv\" (UniqueName: \"kubernetes.io/projected/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-kube-api-access-m2hkv\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.813240 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.813263 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.813282 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4130fea-be36-47f2-9940-fd3bddcbe3c5-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.813305 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.813331 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czf2b\" (UniqueName: \"kubernetes.io/projected/a4130fea-be36-47f2-9940-fd3bddcbe3c5-kube-api-access-czf2b\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:53 crc kubenswrapper[4870]: I0130 08:30:53.813356 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4130fea-be36-47f2-9940-fd3bddcbe3c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.019657 4870 generic.go:334] "Generic (PLEG): container finished" podID="a4130fea-be36-47f2-9940-fd3bddcbe3c5" containerID="690f1c21a74988a80744d4141420ba5d3d5d318a18b2e422108e6e5ee3f7be87" exitCode=0
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.019687 4870 generic.go:334] "Generic (PLEG): container finished" podID="a4130fea-be36-47f2-9940-fd3bddcbe3c5" containerID="e040ac1b4d2768ad2859ffb22248f63acb61fa92fe83ab4891bb419e25c4da11" exitCode=143
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.019719 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4130fea-be36-47f2-9940-fd3bddcbe3c5","Type":"ContainerDied","Data":"690f1c21a74988a80744d4141420ba5d3d5d318a18b2e422108e6e5ee3f7be87"}
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.019770 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4130fea-be36-47f2-9940-fd3bddcbe3c5","Type":"ContainerDied","Data":"e040ac1b4d2768ad2859ffb22248f63acb61fa92fe83ab4891bb419e25c4da11"}
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.019782 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4130fea-be36-47f2-9940-fd3bddcbe3c5","Type":"ContainerDied","Data":"2bab53308716b22e97e1a942ac95ceca0d270b7f4b42f3d5be8a0178321b83a8"}
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.019799 4870 scope.go:117] "RemoveContainer" containerID="690f1c21a74988a80744d4141420ba5d3d5d318a18b2e422108e6e5ee3f7be87"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.019816 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.027654 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vrk8x"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.028698 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vrk8x" event={"ID":"8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5","Type":"ContainerDied","Data":"cabf47e5c9ce25b5156d8706dcfa4a8797c2b4154837e6413d0ebe4fba915dd9"}
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.028735 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cabf47e5c9ce25b5156d8706dcfa4a8797c2b4154837e6413d0ebe4fba915dd9"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.051943 4870 scope.go:117] "RemoveContainer" containerID="e040ac1b4d2768ad2859ffb22248f63acb61fa92fe83ab4891bb419e25c4da11"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.097280 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3688605b-306e-4093-93d5-b96cae2a80de" path="/var/lib/kubelet/pods/3688605b-306e-4093-93d5-b96cae2a80de/volumes"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.146041 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.181680 4870 scope.go:117] "RemoveContainer" containerID="690f1c21a74988a80744d4141420ba5d3d5d318a18b2e422108e6e5ee3f7be87"
Jan 30 08:30:54 crc kubenswrapper[4870]: E0130 08:30:54.182320 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"690f1c21a74988a80744d4141420ba5d3d5d318a18b2e422108e6e5ee3f7be87\": container with ID starting with 690f1c21a74988a80744d4141420ba5d3d5d318a18b2e422108e6e5ee3f7be87 not found: ID does not exist" containerID="690f1c21a74988a80744d4141420ba5d3d5d318a18b2e422108e6e5ee3f7be87"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.182357 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"690f1c21a74988a80744d4141420ba5d3d5d318a18b2e422108e6e5ee3f7be87"} err="failed to get container status \"690f1c21a74988a80744d4141420ba5d3d5d318a18b2e422108e6e5ee3f7be87\": rpc error: code = NotFound desc = could not find container \"690f1c21a74988a80744d4141420ba5d3d5d318a18b2e422108e6e5ee3f7be87\": container with ID starting with 690f1c21a74988a80744d4141420ba5d3d5d318a18b2e422108e6e5ee3f7be87 not found: ID does not exist"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.182379 4870 scope.go:117] "RemoveContainer" containerID="e040ac1b4d2768ad2859ffb22248f63acb61fa92fe83ab4891bb419e25c4da11"
Jan 30 08:30:54 crc kubenswrapper[4870]: E0130 08:30:54.182582 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e040ac1b4d2768ad2859ffb22248f63acb61fa92fe83ab4891bb419e25c4da11\": container with ID starting with e040ac1b4d2768ad2859ffb22248f63acb61fa92fe83ab4891bb419e25c4da11 not found: ID does not exist" containerID="e040ac1b4d2768ad2859ffb22248f63acb61fa92fe83ab4891bb419e25c4da11"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.182605 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e040ac1b4d2768ad2859ffb22248f63acb61fa92fe83ab4891bb419e25c4da11"} err="failed to get container status \"e040ac1b4d2768ad2859ffb22248f63acb61fa92fe83ab4891bb419e25c4da11\": rpc error: code = NotFound desc = could not find container \"e040ac1b4d2768ad2859ffb22248f63acb61fa92fe83ab4891bb419e25c4da11\": container with ID starting with e040ac1b4d2768ad2859ffb22248f63acb61fa92fe83ab4891bb419e25c4da11 not found: ID does not exist"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.182619 4870 scope.go:117] "RemoveContainer" containerID="690f1c21a74988a80744d4141420ba5d3d5d318a18b2e422108e6e5ee3f7be87"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.182850 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"690f1c21a74988a80744d4141420ba5d3d5d318a18b2e422108e6e5ee3f7be87"} err="failed to get container status \"690f1c21a74988a80744d4141420ba5d3d5d318a18b2e422108e6e5ee3f7be87\": rpc error: code = NotFound desc = could not find container \"690f1c21a74988a80744d4141420ba5d3d5d318a18b2e422108e6e5ee3f7be87\": container with ID starting with 690f1c21a74988a80744d4141420ba5d3d5d318a18b2e422108e6e5ee3f7be87 not found: ID does not exist"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.182974 4870 scope.go:117] "RemoveContainer" containerID="e040ac1b4d2768ad2859ffb22248f63acb61fa92fe83ab4891bb419e25c4da11"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.183318 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e040ac1b4d2768ad2859ffb22248f63acb61fa92fe83ab4891bb419e25c4da11"} err="failed to get container status \"e040ac1b4d2768ad2859ffb22248f63acb61fa92fe83ab4891bb419e25c4da11\": rpc error: code = NotFound desc = could not find container \"e040ac1b4d2768ad2859ffb22248f63acb61fa92fe83ab4891bb419e25c4da11\": container with ID starting with e040ac1b4d2768ad2859ffb22248f63acb61fa92fe83ab4891bb419e25c4da11 not found: ID does not exist"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.185386 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.185625 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e7c987d2-eb6f-4ad7-a6b3-97181526dc24" containerName="nova-scheduler-scheduler" containerID="cri-o://eeeda60427741389f29ce7682e68db91dead5b11853ceb5b520159a71514e942" gracePeriod=30
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.200639 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.214466 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.224775 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 08:30:54 crc kubenswrapper[4870]: E0130 08:30:54.225350 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3688605b-306e-4093-93d5-b96cae2a80de" containerName="dnsmasq-dns"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.225381 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="3688605b-306e-4093-93d5-b96cae2a80de" containerName="dnsmasq-dns"
Jan 30 08:30:54 crc kubenswrapper[4870]: E0130 08:30:54.225410 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4130fea-be36-47f2-9940-fd3bddcbe3c5" containerName="nova-metadata-metadata"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.225420 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4130fea-be36-47f2-9940-fd3bddcbe3c5" containerName="nova-metadata-metadata"
Jan 30 08:30:54 crc kubenswrapper[4870]: E0130 08:30:54.225457 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3688605b-306e-4093-93d5-b96cae2a80de" containerName="init"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.225466 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="3688605b-306e-4093-93d5-b96cae2a80de" containerName="init"
Jan 30 08:30:54 crc kubenswrapper[4870]: E0130 08:30:54.225486 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4130fea-be36-47f2-9940-fd3bddcbe3c5" containerName="nova-metadata-log"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.225494 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4130fea-be36-47f2-9940-fd3bddcbe3c5" containerName="nova-metadata-log"
Jan 30 08:30:54 crc kubenswrapper[4870]: E0130 08:30:54.225517 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5" containerName="nova-manage"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.225525 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5" containerName="nova-manage"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.225770 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4130fea-be36-47f2-9940-fd3bddcbe3c5" containerName="nova-metadata-log"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.225802 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5" containerName="nova-manage"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.225820 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="3688605b-306e-4093-93d5-b96cae2a80de" containerName="dnsmasq-dns"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.225841 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4130fea-be36-47f2-9940-fd3bddcbe3c5" containerName="nova-metadata-metadata"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.227389 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.229613 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.229859 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.233178 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.330706 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcdab968-579c-4189-87c5-05bad5469d6c-logs\") pod \"nova-metadata-0\" (UID: \"dcdab968-579c-4189-87c5-05bad5469d6c\") " pod="openstack/nova-metadata-0"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.330773 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcdab968-579c-4189-87c5-05bad5469d6c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dcdab968-579c-4189-87c5-05bad5469d6c\") " pod="openstack/nova-metadata-0"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.330821 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd5ql\" (UniqueName: \"kubernetes.io/projected/dcdab968-579c-4189-87c5-05bad5469d6c-kube-api-access-hd5ql\") pod \"nova-metadata-0\" (UID: \"dcdab968-579c-4189-87c5-05bad5469d6c\") " pod="openstack/nova-metadata-0"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.330862 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcdab968-579c-4189-87c5-05bad5469d6c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dcdab968-579c-4189-87c5-05bad5469d6c\") " pod="openstack/nova-metadata-0"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.330917 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcdab968-579c-4189-87c5-05bad5469d6c-config-data\") pod \"nova-metadata-0\" (UID: \"dcdab968-579c-4189-87c5-05bad5469d6c\") " pod="openstack/nova-metadata-0"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.433087 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcdab968-579c-4189-87c5-05bad5469d6c-logs\") pod \"nova-metadata-0\" (UID: \"dcdab968-579c-4189-87c5-05bad5469d6c\") " pod="openstack/nova-metadata-0"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.433280 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcdab968-579c-4189-87c5-05bad5469d6c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dcdab968-579c-4189-87c5-05bad5469d6c\") " pod="openstack/nova-metadata-0"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.433414 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd5ql\" (UniqueName: \"kubernetes.io/projected/dcdab968-579c-4189-87c5-05bad5469d6c-kube-api-access-hd5ql\") pod \"nova-metadata-0\" (UID: \"dcdab968-579c-4189-87c5-05bad5469d6c\") " pod="openstack/nova-metadata-0"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.433547 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcdab968-579c-4189-87c5-05bad5469d6c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dcdab968-579c-4189-87c5-05bad5469d6c\") " pod="openstack/nova-metadata-0"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.433591 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcdab968-579c-4189-87c5-05bad5469d6c-logs\") pod \"nova-metadata-0\" (UID: \"dcdab968-579c-4189-87c5-05bad5469d6c\") " pod="openstack/nova-metadata-0"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.433732 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcdab968-579c-4189-87c5-05bad5469d6c-config-data\") pod \"nova-metadata-0\" (UID: \"dcdab968-579c-4189-87c5-05bad5469d6c\") " pod="openstack/nova-metadata-0"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.437236 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcdab968-579c-4189-87c5-05bad5469d6c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dcdab968-579c-4189-87c5-05bad5469d6c\") " pod="openstack/nova-metadata-0"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.437856 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcdab968-579c-4189-87c5-05bad5469d6c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dcdab968-579c-4189-87c5-05bad5469d6c\") " pod="openstack/nova-metadata-0"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.439460 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcdab968-579c-4189-87c5-05bad5469d6c-config-data\") pod \"nova-metadata-0\" (UID: \"dcdab968-579c-4189-87c5-05bad5469d6c\") " pod="openstack/nova-metadata-0"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.457235 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd5ql\" (UniqueName: \"kubernetes.io/projected/dcdab968-579c-4189-87c5-05bad5469d6c-kube-api-access-hd5ql\") pod \"nova-metadata-0\" (UID: \"dcdab968-579c-4189-87c5-05bad5469d6c\") " pod="openstack/nova-metadata-0"
Jan 30 08:30:54 crc kubenswrapper[4870]: I0130 08:30:54.549527 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 08:30:55 crc kubenswrapper[4870]: I0130 08:30:55.039020 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="35497556-3464-49d4-9dc2-8f8153a1db82" containerName="nova-api-log" containerID="cri-o://f6e6311173c91fb7efc38adadec81d62d3654a13f033876ef586eb719910253a" gracePeriod=30
Jan 30 08:30:55 crc kubenswrapper[4870]: I0130 08:30:55.039191 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="35497556-3464-49d4-9dc2-8f8153a1db82" containerName="nova-api-api" containerID="cri-o://3fe2bc5983328951bf21fed89d98d84320282998186c3a7ff39b45c9f020cb9e" gracePeriod=30
Jan 30 08:30:55 crc kubenswrapper[4870]: I0130 08:30:55.063302 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 08:30:55 crc kubenswrapper[4870]: I0130 08:30:55.602408 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 08:30:55 crc kubenswrapper[4870]: I0130 08:30:55.653431 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35497556-3464-49d4-9dc2-8f8153a1db82-logs\") pod \"35497556-3464-49d4-9dc2-8f8153a1db82\" (UID: \"35497556-3464-49d4-9dc2-8f8153a1db82\") "
Jan 30 08:30:55 crc kubenswrapper[4870]: I0130 08:30:55.653513 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35497556-3464-49d4-9dc2-8f8153a1db82-combined-ca-bundle\") pod \"35497556-3464-49d4-9dc2-8f8153a1db82\" (UID: \"35497556-3464-49d4-9dc2-8f8153a1db82\") "
Jan 30 08:30:55 crc kubenswrapper[4870]: I0130 08:30:55.653590 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qwkz\" (UniqueName: \"kubernetes.io/projected/35497556-3464-49d4-9dc2-8f8153a1db82-kube-api-access-9qwkz\") pod \"35497556-3464-49d4-9dc2-8f8153a1db82\" (UID: \"35497556-3464-49d4-9dc2-8f8153a1db82\") "
Jan 30 08:30:55 crc kubenswrapper[4870]: I0130 08:30:55.653622 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35497556-3464-49d4-9dc2-8f8153a1db82-config-data\") pod \"35497556-3464-49d4-9dc2-8f8153a1db82\" (UID: \"35497556-3464-49d4-9dc2-8f8153a1db82\") "
Jan 30 08:30:55 crc kubenswrapper[4870]: I0130 08:30:55.653893 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35497556-3464-49d4-9dc2-8f8153a1db82-logs" (OuterVolumeSpecName: "logs") pod "35497556-3464-49d4-9dc2-8f8153a1db82" (UID: "35497556-3464-49d4-9dc2-8f8153a1db82"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:30:55 crc kubenswrapper[4870]: I0130 08:30:55.654152 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35497556-3464-49d4-9dc2-8f8153a1db82-logs\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:55 crc kubenswrapper[4870]: I0130 08:30:55.657246 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35497556-3464-49d4-9dc2-8f8153a1db82-kube-api-access-9qwkz" (OuterVolumeSpecName: "kube-api-access-9qwkz") pod "35497556-3464-49d4-9dc2-8f8153a1db82" (UID: "35497556-3464-49d4-9dc2-8f8153a1db82"). InnerVolumeSpecName "kube-api-access-9qwkz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:30:55 crc kubenswrapper[4870]: I0130 08:30:55.678795 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35497556-3464-49d4-9dc2-8f8153a1db82-config-data" (OuterVolumeSpecName: "config-data") pod "35497556-3464-49d4-9dc2-8f8153a1db82" (UID: "35497556-3464-49d4-9dc2-8f8153a1db82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:30:55 crc kubenswrapper[4870]: I0130 08:30:55.679971 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35497556-3464-49d4-9dc2-8f8153a1db82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35497556-3464-49d4-9dc2-8f8153a1db82" (UID: "35497556-3464-49d4-9dc2-8f8153a1db82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:30:55 crc kubenswrapper[4870]: I0130 08:30:55.756488 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35497556-3464-49d4-9dc2-8f8153a1db82-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:55 crc kubenswrapper[4870]: I0130 08:30:55.756725 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qwkz\" (UniqueName: \"kubernetes.io/projected/35497556-3464-49d4-9dc2-8f8153a1db82-kube-api-access-9qwkz\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:55 crc kubenswrapper[4870]: I0130 08:30:55.756797 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35497556-3464-49d4-9dc2-8f8153a1db82-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.054229 4870 generic.go:334] "Generic (PLEG): container finished" podID="35497556-3464-49d4-9dc2-8f8153a1db82" containerID="3fe2bc5983328951bf21fed89d98d84320282998186c3a7ff39b45c9f020cb9e" exitCode=0
Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.054641 4870 generic.go:334] "Generic (PLEG): container finished" podID="35497556-3464-49d4-9dc2-8f8153a1db82" containerID="f6e6311173c91fb7efc38adadec81d62d3654a13f033876ef586eb719910253a" exitCode=143
Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.054296 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.054318 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"35497556-3464-49d4-9dc2-8f8153a1db82","Type":"ContainerDied","Data":"3fe2bc5983328951bf21fed89d98d84320282998186c3a7ff39b45c9f020cb9e"}
Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.054831 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"35497556-3464-49d4-9dc2-8f8153a1db82","Type":"ContainerDied","Data":"f6e6311173c91fb7efc38adadec81d62d3654a13f033876ef586eb719910253a"}
Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.054923 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"35497556-3464-49d4-9dc2-8f8153a1db82","Type":"ContainerDied","Data":"4241ed3c0f6370f180f8f986d85580b542b235a9c663a81abf39c2245d59012a"}
Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.054956 4870 scope.go:117] "RemoveContainer" containerID="3fe2bc5983328951bf21fed89d98d84320282998186c3a7ff39b45c9f020cb9e"
Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.064978 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dcdab968-579c-4189-87c5-05bad5469d6c","Type":"ContainerStarted","Data":"366fec4d0fab4df6c701136c8cfd06595e8cf3699db478403889218f9cdeaf82"}
Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.065082 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dcdab968-579c-4189-87c5-05bad5469d6c","Type":"ContainerStarted","Data":"d24242e6d0acaec6d8bd3dd1615c1d42924eeb6b050411d36a27e15a6dedb161"}
Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.065110 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dcdab968-579c-4189-87c5-05bad5469d6c","Type":"ContainerStarted","Data":"01a2218e5e8fc92f78dfaa53cf3e950822f8a4a9869c349d33052df56fc52370"}
Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.102098 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.102033179 podStartE2EDuration="2.102033179s" podCreationTimestamp="2026-01-30 08:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:30:56.094708699 +0000 UTC m=+1294.790255848" watchObservedRunningTime="2026-01-30 08:30:56.102033179 +0000 UTC m=+1294.797580348"
Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.107535 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4130fea-be36-47f2-9940-fd3bddcbe3c5" path="/var/lib/kubelet/pods/a4130fea-be36-47f2-9940-fd3bddcbe3c5/volumes"
Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.162310 4870 scope.go:117] "RemoveContainer" containerID="f6e6311173c91fb7efc38adadec81d62d3654a13f033876ef586eb719910253a"
Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.199169 4870 scope.go:117] "RemoveContainer" containerID="3fe2bc5983328951bf21fed89d98d84320282998186c3a7ff39b45c9f020cb9e"
Jan 30 08:30:56 crc kubenswrapper[4870]: E0130 08:30:56.200108 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fe2bc5983328951bf21fed89d98d84320282998186c3a7ff39b45c9f020cb9e\": container with ID starting with 3fe2bc5983328951bf21fed89d98d84320282998186c3a7ff39b45c9f020cb9e not found: ID does not exist" containerID="3fe2bc5983328951bf21fed89d98d84320282998186c3a7ff39b45c9f020cb9e"
Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.200194 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe2bc5983328951bf21fed89d98d84320282998186c3a7ff39b45c9f020cb9e"} err="failed to get container status \"3fe2bc5983328951bf21fed89d98d84320282998186c3a7ff39b45c9f020cb9e\": rpc error: code = NotFound desc = could not find container \"3fe2bc5983328951bf21fed89d98d84320282998186c3a7ff39b45c9f020cb9e\": container with ID starting with 3fe2bc5983328951bf21fed89d98d84320282998186c3a7ff39b45c9f020cb9e not found: ID does not exist"
Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.200244 4870 scope.go:117] "RemoveContainer" containerID="f6e6311173c91fb7efc38adadec81d62d3654a13f033876ef586eb719910253a"
Jan 30 08:30:56 crc kubenswrapper[4870]: E0130 08:30:56.201305 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6e6311173c91fb7efc38adadec81d62d3654a13f033876ef586eb719910253a\": container with ID starting with f6e6311173c91fb7efc38adadec81d62d3654a13f033876ef586eb719910253a not found: ID does not exist" containerID="f6e6311173c91fb7efc38adadec81d62d3654a13f033876ef586eb719910253a"
Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.201357 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6e6311173c91fb7efc38adadec81d62d3654a13f033876ef586eb719910253a"} err="failed to get container status \"f6e6311173c91fb7efc38adadec81d62d3654a13f033876ef586eb719910253a\": rpc error: code = NotFound desc = could not find container \"f6e6311173c91fb7efc38adadec81d62d3654a13f033876ef586eb719910253a\": container with ID starting with f6e6311173c91fb7efc38adadec81d62d3654a13f033876ef586eb719910253a not found: ID does not exist"
Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.201393 4870 scope.go:117] "RemoveContainer" containerID="3fe2bc5983328951bf21fed89d98d84320282998186c3a7ff39b45c9f020cb9e"
Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.201828 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe2bc5983328951bf21fed89d98d84320282998186c3a7ff39b45c9f020cb9e"} err="failed to get container status \"3fe2bc5983328951bf21fed89d98d84320282998186c3a7ff39b45c9f020cb9e\": rpc error: code = NotFound desc = could not find container \"3fe2bc5983328951bf21fed89d98d84320282998186c3a7ff39b45c9f020cb9e\": container with ID starting with 3fe2bc5983328951bf21fed89d98d84320282998186c3a7ff39b45c9f020cb9e not found: ID does not exist"
Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.201862 4870 scope.go:117] "RemoveContainer" containerID="f6e6311173c91fb7efc38adadec81d62d3654a13f033876ef586eb719910253a"
Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.202277 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6e6311173c91fb7efc38adadec81d62d3654a13f033876ef586eb719910253a"} err="failed to get container status \"f6e6311173c91fb7efc38adadec81d62d3654a13f033876ef586eb719910253a\": rpc error: code = NotFound desc = could not find container \"f6e6311173c91fb7efc38adadec81d62d3654a13f033876ef586eb719910253a\": container with ID starting with f6e6311173c91fb7efc38adadec81d62d3654a13f033876ef586eb719910253a not found: ID does not exist"
Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.367368 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 08:30:56 crc kubenswrapper[4870]: I0130 08:30:56.576740 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:30:59 crc kubenswrapper[4870]: I0130 08:30:59.550478 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 08:30:59 crc kubenswrapper[4870]: I0130 08:30:59.551116 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 08:31:00 crc kubenswrapper[4870]: I0130 08:31:00.123925 4870 generic.go:334] "Generic (PLEG): container finished" podID="df7d1e35-e72c-4a05-8a4a-89647f93a26c" containerID="8042d48e5c92127670e964e37628c795eb4a833864e07f5be3a23644c40ab2aa" exitCode=0 Jan 30 08:31:00 crc kubenswrapper[4870]: I0130 08:31:00.123987 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-m882v" event={"ID":"df7d1e35-e72c-4a05-8a4a-89647f93a26c","Type":"ContainerDied","Data":"8042d48e5c92127670e964e37628c795eb4a833864e07f5be3a23644c40ab2aa"} Jan 30 08:31:01 crc kubenswrapper[4870]: I0130 08:31:01.081392 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 08:31:01 crc kubenswrapper[4870]: I0130 08:31:01.535581 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-m882v" Jan 30 08:31:01 crc kubenswrapper[4870]: I0130 08:31:01.585854 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7d1e35-e72c-4a05-8a4a-89647f93a26c-scripts\") pod \"df7d1e35-e72c-4a05-8a4a-89647f93a26c\" (UID: \"df7d1e35-e72c-4a05-8a4a-89647f93a26c\") " Jan 30 08:31:01 crc kubenswrapper[4870]: I0130 08:31:01.585985 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2s2k\" (UniqueName: \"kubernetes.io/projected/df7d1e35-e72c-4a05-8a4a-89647f93a26c-kube-api-access-d2s2k\") pod \"df7d1e35-e72c-4a05-8a4a-89647f93a26c\" (UID: \"df7d1e35-e72c-4a05-8a4a-89647f93a26c\") " Jan 30 08:31:01 crc kubenswrapper[4870]: I0130 08:31:01.586190 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7d1e35-e72c-4a05-8a4a-89647f93a26c-config-data\") pod \"df7d1e35-e72c-4a05-8a4a-89647f93a26c\" (UID: \"df7d1e35-e72c-4a05-8a4a-89647f93a26c\") " Jan 30 08:31:01 crc kubenswrapper[4870]: I0130 08:31:01.586263 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7d1e35-e72c-4a05-8a4a-89647f93a26c-combined-ca-bundle\") pod \"df7d1e35-e72c-4a05-8a4a-89647f93a26c\" (UID: \"df7d1e35-e72c-4a05-8a4a-89647f93a26c\") " Jan 30 08:31:01 crc kubenswrapper[4870]: I0130 08:31:01.594415 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7d1e35-e72c-4a05-8a4a-89647f93a26c-scripts" (OuterVolumeSpecName: "scripts") pod "df7d1e35-e72c-4a05-8a4a-89647f93a26c" (UID: "df7d1e35-e72c-4a05-8a4a-89647f93a26c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:31:01 crc kubenswrapper[4870]: I0130 08:31:01.594481 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df7d1e35-e72c-4a05-8a4a-89647f93a26c-kube-api-access-d2s2k" (OuterVolumeSpecName: "kube-api-access-d2s2k") pod "df7d1e35-e72c-4a05-8a4a-89647f93a26c" (UID: "df7d1e35-e72c-4a05-8a4a-89647f93a26c"). InnerVolumeSpecName "kube-api-access-d2s2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:31:01 crc kubenswrapper[4870]: I0130 08:31:01.626476 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7d1e35-e72c-4a05-8a4a-89647f93a26c-config-data" (OuterVolumeSpecName: "config-data") pod "df7d1e35-e72c-4a05-8a4a-89647f93a26c" (UID: "df7d1e35-e72c-4a05-8a4a-89647f93a26c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:31:01 crc kubenswrapper[4870]: I0130 08:31:01.639485 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7d1e35-e72c-4a05-8a4a-89647f93a26c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df7d1e35-e72c-4a05-8a4a-89647f93a26c" (UID: "df7d1e35-e72c-4a05-8a4a-89647f93a26c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:31:01 crc kubenswrapper[4870]: I0130 08:31:01.688221 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7d1e35-e72c-4a05-8a4a-89647f93a26c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:01 crc kubenswrapper[4870]: I0130 08:31:01.688254 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7d1e35-e72c-4a05-8a4a-89647f93a26c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:01 crc kubenswrapper[4870]: I0130 08:31:01.688268 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7d1e35-e72c-4a05-8a4a-89647f93a26c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:01 crc kubenswrapper[4870]: I0130 08:31:01.688280 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2s2k\" (UniqueName: \"kubernetes.io/projected/df7d1e35-e72c-4a05-8a4a-89647f93a26c-kube-api-access-d2s2k\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.149545 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-m882v" event={"ID":"df7d1e35-e72c-4a05-8a4a-89647f93a26c","Type":"ContainerDied","Data":"1180a868984313de66495d96e16fbf7b9e7f4d89169ce27b87d1df78e6c1cbcf"} Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.149584 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1180a868984313de66495d96e16fbf7b9e7f4d89169ce27b87d1df78e6c1cbcf" Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.149647 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-m882v" Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.236845 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 08:31:02 crc kubenswrapper[4870]: E0130 08:31:02.237321 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35497556-3464-49d4-9dc2-8f8153a1db82" containerName="nova-api-api" Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.237346 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="35497556-3464-49d4-9dc2-8f8153a1db82" containerName="nova-api-api" Jan 30 08:31:02 crc kubenswrapper[4870]: E0130 08:31:02.237362 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35497556-3464-49d4-9dc2-8f8153a1db82" containerName="nova-api-log" Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.237370 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="35497556-3464-49d4-9dc2-8f8153a1db82" containerName="nova-api-log" Jan 30 08:31:02 crc kubenswrapper[4870]: E0130 08:31:02.237405 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7d1e35-e72c-4a05-8a4a-89647f93a26c" containerName="nova-cell1-conductor-db-sync" Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.237415 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7d1e35-e72c-4a05-8a4a-89647f93a26c" containerName="nova-cell1-conductor-db-sync" Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.237639 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="35497556-3464-49d4-9dc2-8f8153a1db82" containerName="nova-api-log" Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.237670 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7d1e35-e72c-4a05-8a4a-89647f93a26c" containerName="nova-cell1-conductor-db-sync" Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.237693 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="35497556-3464-49d4-9dc2-8f8153a1db82" containerName="nova-api-api" Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.238551 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.241099 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.257008 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.298897 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5686258-ed50-49a1-920b-77e9bbe01c55-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e5686258-ed50-49a1-920b-77e9bbe01c55\") " pod="openstack/nova-cell1-conductor-0" Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.299035 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5686258-ed50-49a1-920b-77e9bbe01c55-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e5686258-ed50-49a1-920b-77e9bbe01c55\") " pod="openstack/nova-cell1-conductor-0" Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.299078 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shltt\" (UniqueName: \"kubernetes.io/projected/e5686258-ed50-49a1-920b-77e9bbe01c55-kube-api-access-shltt\") pod \"nova-cell1-conductor-0\" (UID: \"e5686258-ed50-49a1-920b-77e9bbe01c55\") " pod="openstack/nova-cell1-conductor-0" Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.400311 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5686258-ed50-49a1-920b-77e9bbe01c55-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e5686258-ed50-49a1-920b-77e9bbe01c55\") " pod="openstack/nova-cell1-conductor-0" Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.400589 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shltt\" (UniqueName: \"kubernetes.io/projected/e5686258-ed50-49a1-920b-77e9bbe01c55-kube-api-access-shltt\") pod \"nova-cell1-conductor-0\" (UID: \"e5686258-ed50-49a1-920b-77e9bbe01c55\") " pod="openstack/nova-cell1-conductor-0" Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.400764 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5686258-ed50-49a1-920b-77e9bbe01c55-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e5686258-ed50-49a1-920b-77e9bbe01c55\") " pod="openstack/nova-cell1-conductor-0" Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.407463 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5686258-ed50-49a1-920b-77e9bbe01c55-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e5686258-ed50-49a1-920b-77e9bbe01c55\") " pod="openstack/nova-cell1-conductor-0" Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.408463 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5686258-ed50-49a1-920b-77e9bbe01c55-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e5686258-ed50-49a1-920b-77e9bbe01c55\") " pod="openstack/nova-cell1-conductor-0" Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.422746 4870 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shltt\" (UniqueName: \"kubernetes.io/projected/e5686258-ed50-49a1-920b-77e9bbe01c55-kube-api-access-shltt\") pod \"nova-cell1-conductor-0\" (UID: \"e5686258-ed50-49a1-920b-77e9bbe01c55\") " pod="openstack/nova-cell1-conductor-0" Jan 30 08:31:02 crc kubenswrapper[4870]: I0130 08:31:02.556965 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 08:31:03 crc kubenswrapper[4870]: I0130 08:31:03.074160 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 08:31:03 crc kubenswrapper[4870]: I0130 08:31:03.161075 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e5686258-ed50-49a1-920b-77e9bbe01c55","Type":"ContainerStarted","Data":"9bc3cf99ea43486766ec0582617eb0c3869fb9fd8ea7d7d2478a53cfa64c25fa"} Jan 30 08:31:04 crc kubenswrapper[4870]: I0130 08:31:04.174413 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e5686258-ed50-49a1-920b-77e9bbe01c55","Type":"ContainerStarted","Data":"702865b8243cf2dc5d72020ba917c8fd76ea4d8cb4669689a4569fd86a0eaeb1"} Jan 30 08:31:04 crc kubenswrapper[4870]: I0130 08:31:04.175935 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 30 08:31:04 crc kubenswrapper[4870]: I0130 08:31:04.195246 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.195226038 podStartE2EDuration="2.195226038s" podCreationTimestamp="2026-01-30 08:31:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:04.187592868 +0000 UTC m=+1302.883139987" watchObservedRunningTime="2026-01-30 08:31:04.195226038 +0000 UTC m=+1302.890773167" Jan 30 08:31:04 crc kubenswrapper[4870]: I0130 08:31:04.559244 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 08:31:04 crc kubenswrapper[4870]: I0130 08:31:04.559315 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 08:31:04 crc kubenswrapper[4870]: I0130 08:31:04.871333 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 08:31:04 crc kubenswrapper[4870]: I0130 08:31:04.871560 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="dd3b1e9c-90bb-46b7-8e19-edc1388b2a67" containerName="kube-state-metrics" containerID="cri-o://9f3acd6bbada01f386eced240ac88a963b5d6446c306ab4b66863d5ccf3e1172" gracePeriod=30 Jan 30 08:31:05 crc kubenswrapper[4870]: E0130 08:31:05.092798 4870 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd3b1e9c_90bb_46b7_8e19_edc1388b2a67.slice/crio-conmon-9f3acd6bbada01f386eced240ac88a963b5d6446c306ab4b66863d5ccf3e1172.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd3b1e9c_90bb_46b7_8e19_edc1388b2a67.slice/crio-9f3acd6bbada01f386eced240ac88a963b5d6446c306ab4b66863d5ccf3e1172.scope\": RecentStats: unable to find data in memory cache]" Jan 30 08:31:05 crc 
kubenswrapper[4870]: I0130 08:31:05.207043 4870 generic.go:334] "Generic (PLEG): container finished" podID="dd3b1e9c-90bb-46b7-8e19-edc1388b2a67" containerID="9f3acd6bbada01f386eced240ac88a963b5d6446c306ab4b66863d5ccf3e1172" exitCode=2 Jan 30 08:31:05 crc kubenswrapper[4870]: I0130 08:31:05.208089 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dd3b1e9c-90bb-46b7-8e19-edc1388b2a67","Type":"ContainerDied","Data":"9f3acd6bbada01f386eced240ac88a963b5d6446c306ab4b66863d5ccf3e1172"} Jan 30 08:31:05 crc kubenswrapper[4870]: I0130 08:31:05.436644 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 08:31:05 crc kubenswrapper[4870]: I0130 08:31:05.464981 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckp8w\" (UniqueName: \"kubernetes.io/projected/dd3b1e9c-90bb-46b7-8e19-edc1388b2a67-kube-api-access-ckp8w\") pod \"dd3b1e9c-90bb-46b7-8e19-edc1388b2a67\" (UID: \"dd3b1e9c-90bb-46b7-8e19-edc1388b2a67\") " Jan 30 08:31:05 crc kubenswrapper[4870]: I0130 08:31:05.478103 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd3b1e9c-90bb-46b7-8e19-edc1388b2a67-kube-api-access-ckp8w" (OuterVolumeSpecName: "kube-api-access-ckp8w") pod "dd3b1e9c-90bb-46b7-8e19-edc1388b2a67" (UID: "dd3b1e9c-90bb-46b7-8e19-edc1388b2a67"). InnerVolumeSpecName "kube-api-access-ckp8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:31:05 crc kubenswrapper[4870]: I0130 08:31:05.568943 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckp8w\" (UniqueName: \"kubernetes.io/projected/dd3b1e9c-90bb-46b7-8e19-edc1388b2a67-kube-api-access-ckp8w\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:05 crc kubenswrapper[4870]: I0130 08:31:05.578154 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="dcdab968-579c-4189-87c5-05bad5469d6c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 08:31:05 crc kubenswrapper[4870]: I0130 08:31:05.578204 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="dcdab968-579c-4189-87c5-05bad5469d6c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.221441 4870 util.go:48] "No ready sandbox for pod can be found. 
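The two "Probe failed" records above show the startup probes for nova-metadata-0 timing out before the HTTP response headers arrive. A minimal Go sketch of how that failure class surfaces from net/http's client-side timeout, assuming a hard 1-second limit like the Kubernetes probe default of timeoutSeconds: 1 (illustrative only; the kubelet prober's exact transport settings are not visible in this log, and InsecureSkipVerify is a sketch-only simplification):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 1 * time.Second, // probes default to timeoutSeconds: 1
            Transport: &http.Transport{
                // The probe hits a bare pod IP over HTTPS, so certificate
                // verification is relaxed here for the sketch.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        // Pod IP and port taken from the probe output logged above.
        _, err := client.Get("https://10.217.0.216:8775/")
        // Against a backend that is slow to send headers, err reports the
        // same "Client.Timeout exceeded while awaiting headers" failure.
        fmt.Println(err)
    }

One failure is logged per container that defines a startup probe; until a probe attempt succeeds within the configured failure threshold, the kubelet keeps the container in the not-started state, which matches the earlier status="unhealthy" SyncLoop entries for this pod.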
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.222276 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dd3b1e9c-90bb-46b7-8e19-edc1388b2a67","Type":"ContainerDied","Data":"e0a5d30d3c77180ecb50a167671b3d1f7955018f6057ec5124abb113c1fa8b6f"}
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.222320 4870 scope.go:117] "RemoveContainer" containerID="9f3acd6bbada01f386eced240ac88a963b5d6446c306ab4b66863d5ccf3e1172"
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.254799 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.274819 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.284359 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 08:31:06 crc kubenswrapper[4870]: E0130 08:31:06.285002 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd3b1e9c-90bb-46b7-8e19-edc1388b2a67" containerName="kube-state-metrics"
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.285024 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd3b1e9c-90bb-46b7-8e19-edc1388b2a67" containerName="kube-state-metrics"
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.285282 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd3b1e9c-90bb-46b7-8e19-edc1388b2a67" containerName="kube-state-metrics"
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.286162 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.288077 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.288281 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.293220 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.382354 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfth5\" (UniqueName: \"kubernetes.io/projected/0deb54ca-48c2-4b35-88c0-dbad5e8b9272-kube-api-access-vfth5\") pod \"kube-state-metrics-0\" (UID: \"0deb54ca-48c2-4b35-88c0-dbad5e8b9272\") " pod="openstack/kube-state-metrics-0"
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.382639 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0deb54ca-48c2-4b35-88c0-dbad5e8b9272-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0deb54ca-48c2-4b35-88c0-dbad5e8b9272\") " pod="openstack/kube-state-metrics-0"
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.382692 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0deb54ca-48c2-4b35-88c0-dbad5e8b9272-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0deb54ca-48c2-4b35-88c0-dbad5e8b9272\") " pod="openstack/kube-state-metrics-0"
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.382743 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0deb54ca-48c2-4b35-88c0-dbad5e8b9272-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0deb54ca-48c2-4b35-88c0-dbad5e8b9272\") " pod="openstack/kube-state-metrics-0"
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.485995 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0deb54ca-48c2-4b35-88c0-dbad5e8b9272-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0deb54ca-48c2-4b35-88c0-dbad5e8b9272\") " pod="openstack/kube-state-metrics-0"
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.486301 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfth5\" (UniqueName: \"kubernetes.io/projected/0deb54ca-48c2-4b35-88c0-dbad5e8b9272-kube-api-access-vfth5\") pod \"kube-state-metrics-0\" (UID: \"0deb54ca-48c2-4b35-88c0-dbad5e8b9272\") " pod="openstack/kube-state-metrics-0"
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.486397 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0deb54ca-48c2-4b35-88c0-dbad5e8b9272-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0deb54ca-48c2-4b35-88c0-dbad5e8b9272\") " pod="openstack/kube-state-metrics-0"
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.486533 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0deb54ca-48c2-4b35-88c0-dbad5e8b9272-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0deb54ca-48c2-4b35-88c0-dbad5e8b9272\") " pod="openstack/kube-state-metrics-0"
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.490482 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0deb54ca-48c2-4b35-88c0-dbad5e8b9272-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0deb54ca-48c2-4b35-88c0-dbad5e8b9272\") " pod="openstack/kube-state-metrics-0"
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.494284 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0deb54ca-48c2-4b35-88c0-dbad5e8b9272-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0deb54ca-48c2-4b35-88c0-dbad5e8b9272\") " pod="openstack/kube-state-metrics-0"
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.506652 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0deb54ca-48c2-4b35-88c0-dbad5e8b9272-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0deb54ca-48c2-4b35-88c0-dbad5e8b9272\") " pod="openstack/kube-state-metrics-0"
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.508398 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfth5\" (UniqueName: \"kubernetes.io/projected/0deb54ca-48c2-4b35-88c0-dbad5e8b9272-kube-api-access-vfth5\") pod \"kube-state-metrics-0\" (UID: \"0deb54ca-48c2-4b35-88c0-dbad5e8b9272\") " pod="openstack/kube-state-metrics-0"
Jan 30 08:31:06 crc kubenswrapper[4870]: I0130 08:31:06.650247 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 30 08:31:07 crc kubenswrapper[4870]: I0130 08:31:07.083963 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 08:31:07 crc kubenswrapper[4870]: I0130 08:31:07.238079 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0deb54ca-48c2-4b35-88c0-dbad5e8b9272","Type":"ContainerStarted","Data":"7db1ca73141d6540be76da681f4d46da24d920385590ee7f38fa0088a26be648"}
Jan 30 08:31:07 crc kubenswrapper[4870]: I0130 08:31:07.260454 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 08:31:07 crc kubenswrapper[4870]: I0130 08:31:07.260725 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f11d5abc-9e24-41c5-9e26-22a939d70180" containerName="ceilometer-central-agent" containerID="cri-o://0440343c5e151c4584f5e06526e461e0ed6bbe86b28f995cc6f02e8fb5a22cdb" gracePeriod=30
Jan 30 08:31:07 crc kubenswrapper[4870]: I0130 08:31:07.260863 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f11d5abc-9e24-41c5-9e26-22a939d70180" containerName="proxy-httpd" containerID="cri-o://722f92a03dfb1268adbc4665dd21c44e9be63f5363a6d3d067feb62e2ae20394" gracePeriod=30
Jan 30 08:31:07 crc kubenswrapper[4870]: I0130 08:31:07.260924 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f11d5abc-9e24-41c5-9e26-22a939d70180" containerName="sg-core" containerID="cri-o://e877fde2e65db2c46b4cbcbdc12f0a69875cd4efef2febf80a45b73912cd72f3" gracePeriod=30
Jan 30 08:31:07 crc kubenswrapper[4870]: I0130 08:31:07.260956 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f11d5abc-9e24-41c5-9e26-22a939d70180" containerName="ceilometer-notification-agent" containerID="cri-o://b3b257aee6689f65a0ce66059c2ab16b201b35c4811c0b35bb99d22792234acd" gracePeriod=30
Jan 30 08:31:08 crc kubenswrapper[4870]: I0130 08:31:08.096106 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd3b1e9c-90bb-46b7-8e19-edc1388b2a67" path="/var/lib/kubelet/pods/dd3b1e9c-90bb-46b7-8e19-edc1388b2a67/volumes"
Jan 30 08:31:08 crc kubenswrapper[4870]: I0130 08:31:08.253810 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0deb54ca-48c2-4b35-88c0-dbad5e8b9272","Type":"ContainerStarted","Data":"d97bf8f7ba290c5c6467f1cebe12d605a85b69530b2a8da28bee0985109cd9a4"}
Jan 30 08:31:08 crc kubenswrapper[4870]: I0130 08:31:08.254990 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 30 08:31:08 crc kubenswrapper[4870]: I0130 08:31:08.257084 4870 generic.go:334] "Generic (PLEG): container finished" podID="f11d5abc-9e24-41c5-9e26-22a939d70180" containerID="722f92a03dfb1268adbc4665dd21c44e9be63f5363a6d3d067feb62e2ae20394" exitCode=0
Jan 30 08:31:08 crc kubenswrapper[4870]: I0130 08:31:08.257127 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f11d5abc-9e24-41c5-9e26-22a939d70180","Type":"ContainerDied","Data":"722f92a03dfb1268adbc4665dd21c44e9be63f5363a6d3d067feb62e2ae20394"}
Jan 30 08:31:08 crc kubenswrapper[4870]: I0130 08:31:08.257140 4870 generic.go:334] "Generic (PLEG): container finished" podID="f11d5abc-9e24-41c5-9e26-22a939d70180" containerID="e877fde2e65db2c46b4cbcbdc12f0a69875cd4efef2febf80a45b73912cd72f3" exitCode=2
Jan 30 08:31:08 crc kubenswrapper[4870]: I0130 08:31:08.257156 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f11d5abc-9e24-41c5-9e26-22a939d70180","Type":"ContainerDied","Data":"e877fde2e65db2c46b4cbcbdc12f0a69875cd4efef2febf80a45b73912cd72f3"}
Jan 30 08:31:08 crc kubenswrapper[4870]: I0130 08:31:08.257161 4870 generic.go:334] "Generic (PLEG): container finished" podID="f11d5abc-9e24-41c5-9e26-22a939d70180" containerID="0440343c5e151c4584f5e06526e461e0ed6bbe86b28f995cc6f02e8fb5a22cdb" exitCode=0
Jan 30 08:31:08 crc kubenswrapper[4870]: I0130 08:31:08.257167 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f11d5abc-9e24-41c5-9e26-22a939d70180","Type":"ContainerDied","Data":"0440343c5e151c4584f5e06526e461e0ed6bbe86b28f995cc6f02e8fb5a22cdb"}
Jan 30 08:31:08 crc kubenswrapper[4870]: I0130 08:31:08.285541 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.904433029 podStartE2EDuration="2.285511231s" podCreationTimestamp="2026-01-30 08:31:06 +0000 UTC" firstStartedPulling="2026-01-30 08:31:07.097482264 +0000 UTC m=+1305.793029373" lastFinishedPulling="2026-01-30 08:31:07.478560466 +0000 UTC m=+1306.174107575" observedRunningTime="2026-01-30 08:31:08.273436971 +0000 UTC m=+1306.968984140" watchObservedRunningTime="2026-01-30 08:31:08.285511231 +0000 UTC m=+1306.981058380"
Jan 30 08:31:12 crc kubenswrapper[4870]: I0130 08:31:12.595203 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 30 08:31:13 crc kubenswrapper[4870]: I0130 08:31:13.838325 4870 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:31:13 crc kubenswrapper[4870]: I0130 08:31:13.950491 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2wxb\" (UniqueName: \"kubernetes.io/projected/f11d5abc-9e24-41c5-9e26-22a939d70180-kube-api-access-l2wxb\") pod \"f11d5abc-9e24-41c5-9e26-22a939d70180\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " Jan 30 08:31:13 crc kubenswrapper[4870]: I0130 08:31:13.950773 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-config-data\") pod \"f11d5abc-9e24-41c5-9e26-22a939d70180\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " Jan 30 08:31:13 crc kubenswrapper[4870]: I0130 08:31:13.950899 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-combined-ca-bundle\") pod \"f11d5abc-9e24-41c5-9e26-22a939d70180\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " Jan 30 08:31:13 crc kubenswrapper[4870]: I0130 08:31:13.950997 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-sg-core-conf-yaml\") pod \"f11d5abc-9e24-41c5-9e26-22a939d70180\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " Jan 30 08:31:13 crc kubenswrapper[4870]: I0130 08:31:13.951100 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-scripts\") pod \"f11d5abc-9e24-41c5-9e26-22a939d70180\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " Jan 30 08:31:13 crc kubenswrapper[4870]: I0130 08:31:13.951300 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f11d5abc-9e24-41c5-9e26-22a939d70180-run-httpd\") pod \"f11d5abc-9e24-41c5-9e26-22a939d70180\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " Jan 30 08:31:13 crc kubenswrapper[4870]: I0130 08:31:13.951397 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f11d5abc-9e24-41c5-9e26-22a939d70180-log-httpd\") pod \"f11d5abc-9e24-41c5-9e26-22a939d70180\" (UID: \"f11d5abc-9e24-41c5-9e26-22a939d70180\") " Jan 30 08:31:13 crc kubenswrapper[4870]: I0130 08:31:13.952146 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f11d5abc-9e24-41c5-9e26-22a939d70180-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f11d5abc-9e24-41c5-9e26-22a939d70180" (UID: "f11d5abc-9e24-41c5-9e26-22a939d70180"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:31:13 crc kubenswrapper[4870]: I0130 08:31:13.952775 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f11d5abc-9e24-41c5-9e26-22a939d70180-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f11d5abc-9e24-41c5-9e26-22a939d70180" (UID: "f11d5abc-9e24-41c5-9e26-22a939d70180"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:31:13 crc kubenswrapper[4870]: I0130 08:31:13.956485 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f11d5abc-9e24-41c5-9e26-22a939d70180-kube-api-access-l2wxb" (OuterVolumeSpecName: "kube-api-access-l2wxb") pod "f11d5abc-9e24-41c5-9e26-22a939d70180" (UID: "f11d5abc-9e24-41c5-9e26-22a939d70180"). InnerVolumeSpecName "kube-api-access-l2wxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:31:13 crc kubenswrapper[4870]: I0130 08:31:13.966147 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-scripts" (OuterVolumeSpecName: "scripts") pod "f11d5abc-9e24-41c5-9e26-22a939d70180" (UID: "f11d5abc-9e24-41c5-9e26-22a939d70180"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:31:13 crc kubenswrapper[4870]: I0130 08:31:13.981158 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f11d5abc-9e24-41c5-9e26-22a939d70180" (UID: "f11d5abc-9e24-41c5-9e26-22a939d70180"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.029181 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f11d5abc-9e24-41c5-9e26-22a939d70180" (UID: "f11d5abc-9e24-41c5-9e26-22a939d70180"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.053781 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-config-data" (OuterVolumeSpecName: "config-data") pod "f11d5abc-9e24-41c5-9e26-22a939d70180" (UID: "f11d5abc-9e24-41c5-9e26-22a939d70180"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.054200 4870 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f11d5abc-9e24-41c5-9e26-22a939d70180-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.054248 4870 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f11d5abc-9e24-41c5-9e26-22a939d70180-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.054259 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2wxb\" (UniqueName: \"kubernetes.io/projected/f11d5abc-9e24-41c5-9e26-22a939d70180-kube-api-access-l2wxb\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.054271 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.054283 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.054291 4870 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.054322 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f11d5abc-9e24-41c5-9e26-22a939d70180-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.327654 4870 generic.go:334] "Generic (PLEG): container finished" podID="f11d5abc-9e24-41c5-9e26-22a939d70180" containerID="b3b257aee6689f65a0ce66059c2ab16b201b35c4811c0b35bb99d22792234acd" exitCode=0 Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.327768 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f11d5abc-9e24-41c5-9e26-22a939d70180","Type":"ContainerDied","Data":"b3b257aee6689f65a0ce66059c2ab16b201b35c4811c0b35bb99d22792234acd"} Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.327809 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.328211 4870 scope.go:117] "RemoveContainer" containerID="722f92a03dfb1268adbc4665dd21c44e9be63f5363a6d3d067feb62e2ae20394" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.328116 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f11d5abc-9e24-41c5-9e26-22a939d70180","Type":"ContainerDied","Data":"0c2a4eddab15cef0cbeaace28cc33784a69c48cca42cc8364520ed9bf9b84959"} Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.359103 4870 scope.go:117] "RemoveContainer" containerID="e877fde2e65db2c46b4cbcbdc12f0a69875cd4efef2febf80a45b73912cd72f3" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.369268 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.401246 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.413133 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:31:14 crc kubenswrapper[4870]: E0130 08:31:14.413699 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f11d5abc-9e24-41c5-9e26-22a939d70180" containerName="ceilometer-central-agent" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.413724 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f11d5abc-9e24-41c5-9e26-22a939d70180" containerName="ceilometer-central-agent" Jan 30 08:31:14 crc kubenswrapper[4870]: E0130 08:31:14.413746 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f11d5abc-9e24-41c5-9e26-22a939d70180" containerName="sg-core" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.413755 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f11d5abc-9e24-41c5-9e26-22a939d70180" containerName="sg-core" Jan 30 08:31:14 crc kubenswrapper[4870]: E0130 08:31:14.413780 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f11d5abc-9e24-41c5-9e26-22a939d70180" containerName="proxy-httpd" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.413792 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f11d5abc-9e24-41c5-9e26-22a939d70180" containerName="proxy-httpd" Jan 30 08:31:14 crc kubenswrapper[4870]: E0130 08:31:14.413808 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f11d5abc-9e24-41c5-9e26-22a939d70180" containerName="ceilometer-notification-agent" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.413816 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f11d5abc-9e24-41c5-9e26-22a939d70180" containerName="ceilometer-notification-agent" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.416405 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f11d5abc-9e24-41c5-9e26-22a939d70180" containerName="ceilometer-notification-agent" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.416426 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f11d5abc-9e24-41c5-9e26-22a939d70180" containerName="proxy-httpd" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.416446 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f11d5abc-9e24-41c5-9e26-22a939d70180" containerName="sg-core" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.416459 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f11d5abc-9e24-41c5-9e26-22a939d70180" 
containerName="ceilometer-central-agent" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.418266 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.420872 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.421294 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.422360 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.462048 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.468347 4870 scope.go:117] "RemoveContainer" containerID="b3b257aee6689f65a0ce66059c2ab16b201b35c4811c0b35bb99d22792234acd" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.490039 4870 scope.go:117] "RemoveContainer" containerID="0440343c5e151c4584f5e06526e461e0ed6bbe86b28f995cc6f02e8fb5a22cdb" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.517604 4870 scope.go:117] "RemoveContainer" containerID="722f92a03dfb1268adbc4665dd21c44e9be63f5363a6d3d067feb62e2ae20394" Jan 30 08:31:14 crc kubenswrapper[4870]: E0130 08:31:14.518410 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"722f92a03dfb1268adbc4665dd21c44e9be63f5363a6d3d067feb62e2ae20394\": container with ID starting with 722f92a03dfb1268adbc4665dd21c44e9be63f5363a6d3d067feb62e2ae20394 not found: ID does not exist" containerID="722f92a03dfb1268adbc4665dd21c44e9be63f5363a6d3d067feb62e2ae20394" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.518444 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"722f92a03dfb1268adbc4665dd21c44e9be63f5363a6d3d067feb62e2ae20394"} err="failed to get container status \"722f92a03dfb1268adbc4665dd21c44e9be63f5363a6d3d067feb62e2ae20394\": rpc error: code = NotFound desc = could not find container \"722f92a03dfb1268adbc4665dd21c44e9be63f5363a6d3d067feb62e2ae20394\": container with ID starting with 722f92a03dfb1268adbc4665dd21c44e9be63f5363a6d3d067feb62e2ae20394 not found: ID does not exist" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.518466 4870 scope.go:117] "RemoveContainer" containerID="e877fde2e65db2c46b4cbcbdc12f0a69875cd4efef2febf80a45b73912cd72f3" Jan 30 08:31:14 crc kubenswrapper[4870]: E0130 08:31:14.518971 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e877fde2e65db2c46b4cbcbdc12f0a69875cd4efef2febf80a45b73912cd72f3\": container with ID starting with e877fde2e65db2c46b4cbcbdc12f0a69875cd4efef2febf80a45b73912cd72f3 not found: ID does not exist" containerID="e877fde2e65db2c46b4cbcbdc12f0a69875cd4efef2febf80a45b73912cd72f3" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.519000 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e877fde2e65db2c46b4cbcbdc12f0a69875cd4efef2febf80a45b73912cd72f3"} err="failed to get container status \"e877fde2e65db2c46b4cbcbdc12f0a69875cd4efef2febf80a45b73912cd72f3\": rpc error: code = NotFound desc = could not find container 
\"e877fde2e65db2c46b4cbcbdc12f0a69875cd4efef2febf80a45b73912cd72f3\": container with ID starting with e877fde2e65db2c46b4cbcbdc12f0a69875cd4efef2febf80a45b73912cd72f3 not found: ID does not exist" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.519015 4870 scope.go:117] "RemoveContainer" containerID="b3b257aee6689f65a0ce66059c2ab16b201b35c4811c0b35bb99d22792234acd" Jan 30 08:31:14 crc kubenswrapper[4870]: E0130 08:31:14.519349 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3b257aee6689f65a0ce66059c2ab16b201b35c4811c0b35bb99d22792234acd\": container with ID starting with b3b257aee6689f65a0ce66059c2ab16b201b35c4811c0b35bb99d22792234acd not found: ID does not exist" containerID="b3b257aee6689f65a0ce66059c2ab16b201b35c4811c0b35bb99d22792234acd" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.519371 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3b257aee6689f65a0ce66059c2ab16b201b35c4811c0b35bb99d22792234acd"} err="failed to get container status \"b3b257aee6689f65a0ce66059c2ab16b201b35c4811c0b35bb99d22792234acd\": rpc error: code = NotFound desc = could not find container \"b3b257aee6689f65a0ce66059c2ab16b201b35c4811c0b35bb99d22792234acd\": container with ID starting with b3b257aee6689f65a0ce66059c2ab16b201b35c4811c0b35bb99d22792234acd not found: ID does not exist" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.519384 4870 scope.go:117] "RemoveContainer" containerID="0440343c5e151c4584f5e06526e461e0ed6bbe86b28f995cc6f02e8fb5a22cdb" Jan 30 08:31:14 crc kubenswrapper[4870]: E0130 08:31:14.519771 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0440343c5e151c4584f5e06526e461e0ed6bbe86b28f995cc6f02e8fb5a22cdb\": container with ID starting with 0440343c5e151c4584f5e06526e461e0ed6bbe86b28f995cc6f02e8fb5a22cdb not found: ID does not exist" containerID="0440343c5e151c4584f5e06526e461e0ed6bbe86b28f995cc6f02e8fb5a22cdb" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.519808 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0440343c5e151c4584f5e06526e461e0ed6bbe86b28f995cc6f02e8fb5a22cdb"} err="failed to get container status \"0440343c5e151c4584f5e06526e461e0ed6bbe86b28f995cc6f02e8fb5a22cdb\": rpc error: code = NotFound desc = could not find container \"0440343c5e151c4584f5e06526e461e0ed6bbe86b28f995cc6f02e8fb5a22cdb\": container with ID starting with 0440343c5e151c4584f5e06526e461e0ed6bbe86b28f995cc6f02e8fb5a22cdb not found: ID does not exist" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.566527 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnc7j\" (UniqueName: \"kubernetes.io/projected/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-kube-api-access-hnc7j\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.566698 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-run-httpd\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.566792 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-scripts\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.566836 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-config-data\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.566935 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-log-httpd\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.566984 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.567124 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.567323 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.568405 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.573682 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.573821 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.669136 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-run-httpd\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.669203 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-scripts\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.669229 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-config-data\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.669330 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-log-httpd\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.669353 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.669378 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.669417 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.669473 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnc7j\" (UniqueName: \"kubernetes.io/projected/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-kube-api-access-hnc7j\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.669735 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-run-httpd\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.670056 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-log-httpd\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.675303 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.675711 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-scripts\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.676497 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.685467 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.686070 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-config-data\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.690334 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnc7j\" (UniqueName: \"kubernetes.io/projected/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-kube-api-access-hnc7j\") pod \"ceilometer-0\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " pod="openstack/ceilometer-0" Jan 30 08:31:14 crc kubenswrapper[4870]: I0130 08:31:14.759192 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:31:15 crc kubenswrapper[4870]: I0130 08:31:15.273455 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:31:15 crc kubenswrapper[4870]: W0130 08:31:15.273625 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe5bff0c_1f97_4c2d_9f95_46c7c3799d27.slice/crio-878d9967ec40148177071222b9ab0dc547347647130442c6f0bf2d56dd31e4d4 WatchSource:0}: Error finding container 878d9967ec40148177071222b9ab0dc547347647130442c6f0bf2d56dd31e4d4: Status 404 returned error can't find the container with id 878d9967ec40148177071222b9ab0dc547347647130442c6f0bf2d56dd31e4d4 Jan 30 08:31:15 crc kubenswrapper[4870]: I0130 08:31:15.339101 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27","Type":"ContainerStarted","Data":"878d9967ec40148177071222b9ab0dc547347647130442c6f0bf2d56dd31e4d4"} Jan 30 08:31:15 crc kubenswrapper[4870]: I0130 08:31:15.347184 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 08:31:16 crc kubenswrapper[4870]: I0130 08:31:16.088083 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f11d5abc-9e24-41c5-9e26-22a939d70180" path="/var/lib/kubelet/pods/f11d5abc-9e24-41c5-9e26-22a939d70180/volumes" Jan 30 08:31:16 crc kubenswrapper[4870]: I0130 08:31:16.660774 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 08:31:17 crc kubenswrapper[4870]: I0130 08:31:17.367503 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27","Type":"ContainerStarted","Data":"919ec938ff2a2a8f2ab814ba8d607fde425c77054e1f0ed868c053bea67814e2"} Jan 30 08:31:17 crc kubenswrapper[4870]: I0130 08:31:17.367890 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27","Type":"ContainerStarted","Data":"718104975a12146cc00fc56171ab65a7bdbb1698df506296389cd8521d23c531"} Jan 30 08:31:18 crc kubenswrapper[4870]: I0130 08:31:18.379194 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27","Type":"ContainerStarted","Data":"75846033cc04fdf75214f4cc82a37a97af903516f1d459b619c65d4c2c05c4e5"} Jan 30 08:31:20 crc kubenswrapper[4870]: I0130 08:31:20.404671 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27","Type":"ContainerStarted","Data":"74609d48db351a4551e4a14571d8ceeb785d76a089cbc5d4cef7533f25e42b9a"} Jan 30 08:31:20 crc kubenswrapper[4870]: I0130 08:31:20.406420 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 08:31:20 crc kubenswrapper[4870]: I0130 08:31:20.443230 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.336718606 podStartE2EDuration="6.44321049s" podCreationTimestamp="2026-01-30 08:31:14 +0000 UTC" firstStartedPulling="2026-01-30 08:31:15.276653965 +0000 UTC m=+1313.972201064" lastFinishedPulling="2026-01-30 08:31:19.383145849 +0000 UTC m=+1318.078692948" observedRunningTime="2026-01-30 08:31:20.429981712 +0000 UTC m=+1319.125528831" watchObservedRunningTime="2026-01-30 08:31:20.44321049 +0000 UTC m=+1319.138757589" Jan 30 08:31:22 crc kubenswrapper[4870]: I0130 08:31:22.426864 4870 generic.go:334] "Generic (PLEG): container finished" podID="21f6c18c-fcc7-4bd5-9a86-81dacd111e90" containerID="cf022953959b6108a335c22e59a92909beccc351b6ace66848278caae812affb" exitCode=137 Jan 30 08:31:22 crc kubenswrapper[4870]: I0130 08:31:22.426934 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"21f6c18c-fcc7-4bd5-9a86-81dacd111e90","Type":"ContainerDied","Data":"cf022953959b6108a335c22e59a92909beccc351b6ace66848278caae812affb"} Jan 30 08:31:22 crc kubenswrapper[4870]: I0130 08:31:22.427333 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"21f6c18c-fcc7-4bd5-9a86-81dacd111e90","Type":"ContainerDied","Data":"ad4cc5b31839637e7fda99e5e00f775d4b4a8266d3fec39e6cf700bdb0b11a6c"} Jan 30 08:31:22 crc kubenswrapper[4870]: I0130 08:31:22.427372 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad4cc5b31839637e7fda99e5e00f775d4b4a8266d3fec39e6cf700bdb0b11a6c" Jan 30 08:31:22 crc kubenswrapper[4870]: I0130 08:31:22.530674 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:22 crc kubenswrapper[4870]: I0130 08:31:22.643699 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21f6c18c-fcc7-4bd5-9a86-81dacd111e90-config-data\") pod \"21f6c18c-fcc7-4bd5-9a86-81dacd111e90\" (UID: \"21f6c18c-fcc7-4bd5-9a86-81dacd111e90\") " Jan 30 08:31:22 crc kubenswrapper[4870]: I0130 08:31:22.644168 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdt9v\" (UniqueName: \"kubernetes.io/projected/21f6c18c-fcc7-4bd5-9a86-81dacd111e90-kube-api-access-sdt9v\") pod \"21f6c18c-fcc7-4bd5-9a86-81dacd111e90\" (UID: \"21f6c18c-fcc7-4bd5-9a86-81dacd111e90\") " Jan 30 08:31:22 crc kubenswrapper[4870]: I0130 08:31:22.644231 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21f6c18c-fcc7-4bd5-9a86-81dacd111e90-combined-ca-bundle\") pod \"21f6c18c-fcc7-4bd5-9a86-81dacd111e90\" (UID: \"21f6c18c-fcc7-4bd5-9a86-81dacd111e90\") " Jan 30 08:31:22 crc kubenswrapper[4870]: I0130 08:31:22.650025 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21f6c18c-fcc7-4bd5-9a86-81dacd111e90-kube-api-access-sdt9v" (OuterVolumeSpecName: "kube-api-access-sdt9v") pod "21f6c18c-fcc7-4bd5-9a86-81dacd111e90" (UID: "21f6c18c-fcc7-4bd5-9a86-81dacd111e90"). InnerVolumeSpecName "kube-api-access-sdt9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:31:22 crc kubenswrapper[4870]: I0130 08:31:22.674288 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21f6c18c-fcc7-4bd5-9a86-81dacd111e90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21f6c18c-fcc7-4bd5-9a86-81dacd111e90" (UID: "21f6c18c-fcc7-4bd5-9a86-81dacd111e90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:31:22 crc kubenswrapper[4870]: I0130 08:31:22.684527 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21f6c18c-fcc7-4bd5-9a86-81dacd111e90-config-data" (OuterVolumeSpecName: "config-data") pod "21f6c18c-fcc7-4bd5-9a86-81dacd111e90" (UID: "21f6c18c-fcc7-4bd5-9a86-81dacd111e90"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:31:22 crc kubenswrapper[4870]: I0130 08:31:22.746518 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdt9v\" (UniqueName: \"kubernetes.io/projected/21f6c18c-fcc7-4bd5-9a86-81dacd111e90-kube-api-access-sdt9v\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:22 crc kubenswrapper[4870]: I0130 08:31:22.746550 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21f6c18c-fcc7-4bd5-9a86-81dacd111e90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:22 crc kubenswrapper[4870]: I0130 08:31:22.746563 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21f6c18c-fcc7-4bd5-9a86-81dacd111e90-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.437252 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.471507 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.481038 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.504414 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 08:31:23 crc kubenswrapper[4870]: E0130 08:31:23.504997 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21f6c18c-fcc7-4bd5-9a86-81dacd111e90" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.505022 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="21f6c18c-fcc7-4bd5-9a86-81dacd111e90" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.505295 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="21f6c18c-fcc7-4bd5-9a86-81dacd111e90" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.506259 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.510108 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.513261 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.515521 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.517136 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.667949 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6319a2a-594b-4da1-be42-ad0918221515-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6319a2a-594b-4da1-be42-ad0918221515\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.668406 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfglr\" (UniqueName: \"kubernetes.io/projected/f6319a2a-594b-4da1-be42-ad0918221515-kube-api-access-nfglr\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6319a2a-594b-4da1-be42-ad0918221515\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.668604 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6319a2a-594b-4da1-be42-ad0918221515-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6319a2a-594b-4da1-be42-ad0918221515\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.668641 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6319a2a-594b-4da1-be42-ad0918221515-config-data\") 
pod \"nova-cell1-novncproxy-0\" (UID: \"f6319a2a-594b-4da1-be42-ad0918221515\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.668723 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6319a2a-594b-4da1-be42-ad0918221515-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6319a2a-594b-4da1-be42-ad0918221515\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.770921 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6319a2a-594b-4da1-be42-ad0918221515-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6319a2a-594b-4da1-be42-ad0918221515\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.771223 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6319a2a-594b-4da1-be42-ad0918221515-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6319a2a-594b-4da1-be42-ad0918221515\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.771383 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6319a2a-594b-4da1-be42-ad0918221515-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6319a2a-594b-4da1-be42-ad0918221515\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.771605 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6319a2a-594b-4da1-be42-ad0918221515-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6319a2a-594b-4da1-be42-ad0918221515\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.771737 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfglr\" (UniqueName: \"kubernetes.io/projected/f6319a2a-594b-4da1-be42-ad0918221515-kube-api-access-nfglr\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6319a2a-594b-4da1-be42-ad0918221515\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.776184 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6319a2a-594b-4da1-be42-ad0918221515-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6319a2a-594b-4da1-be42-ad0918221515\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.776631 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6319a2a-594b-4da1-be42-ad0918221515-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6319a2a-594b-4da1-be42-ad0918221515\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.777435 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6319a2a-594b-4da1-be42-ad0918221515-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6319a2a-594b-4da1-be42-ad0918221515\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.777808 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6319a2a-594b-4da1-be42-ad0918221515-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6319a2a-594b-4da1-be42-ad0918221515\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.788843 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfglr\" (UniqueName: \"kubernetes.io/projected/f6319a2a-594b-4da1-be42-ad0918221515-kube-api-access-nfglr\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6319a2a-594b-4da1-be42-ad0918221515\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:23 crc kubenswrapper[4870]: I0130 08:31:23.835387 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:24 crc kubenswrapper[4870]: I0130 08:31:24.095831 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21f6c18c-fcc7-4bd5-9a86-81dacd111e90" path="/var/lib/kubelet/pods/21f6c18c-fcc7-4bd5-9a86-81dacd111e90/volumes" Jan 30 08:31:24 crc kubenswrapper[4870]: I0130 08:31:24.327115 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 08:31:24 crc kubenswrapper[4870]: W0130 08:31:24.379495 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6319a2a_594b_4da1_be42_ad0918221515.slice/crio-3b18cc7676d8de144f48364305f73360d994ac368fd4a0ebf944668d0f78010d WatchSource:0}: Error finding container 3b18cc7676d8de144f48364305f73360d994ac368fd4a0ebf944668d0f78010d: Status 404 returned error can't find the container with id 3b18cc7676d8de144f48364305f73360d994ac368fd4a0ebf944668d0f78010d Jan 30 08:31:24 crc kubenswrapper[4870]: I0130 08:31:24.460698 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f6319a2a-594b-4da1-be42-ad0918221515","Type":"ContainerStarted","Data":"3b18cc7676d8de144f48364305f73360d994ac368fd4a0ebf944668d0f78010d"} Jan 30 08:31:24 crc kubenswrapper[4870]: I0130 08:31:24.462917 4870 generic.go:334] "Generic (PLEG): container finished" podID="e7c987d2-eb6f-4ad7-a6b3-97181526dc24" containerID="eeeda60427741389f29ce7682e68db91dead5b11853ceb5b520159a71514e942" exitCode=137 Jan 30 08:31:24 crc kubenswrapper[4870]: I0130 08:31:24.462915 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7c987d2-eb6f-4ad7-a6b3-97181526dc24","Type":"ContainerDied","Data":"eeeda60427741389f29ce7682e68db91dead5b11853ceb5b520159a71514e942"} Jan 30 08:31:24 crc kubenswrapper[4870]: I0130 08:31:24.727318 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:31:24 crc kubenswrapper[4870]: I0130 08:31:24.893332 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7c987d2-eb6f-4ad7-a6b3-97181526dc24-config-data\") pod \"e7c987d2-eb6f-4ad7-a6b3-97181526dc24\" (UID: \"e7c987d2-eb6f-4ad7-a6b3-97181526dc24\") " Jan 30 08:31:24 crc kubenswrapper[4870]: I0130 08:31:24.893422 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85spt\" (UniqueName: \"kubernetes.io/projected/e7c987d2-eb6f-4ad7-a6b3-97181526dc24-kube-api-access-85spt\") pod \"e7c987d2-eb6f-4ad7-a6b3-97181526dc24\" (UID: \"e7c987d2-eb6f-4ad7-a6b3-97181526dc24\") " Jan 30 08:31:24 crc kubenswrapper[4870]: I0130 08:31:24.893528 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c987d2-eb6f-4ad7-a6b3-97181526dc24-combined-ca-bundle\") pod \"e7c987d2-eb6f-4ad7-a6b3-97181526dc24\" (UID: \"e7c987d2-eb6f-4ad7-a6b3-97181526dc24\") " Jan 30 08:31:24 crc kubenswrapper[4870]: I0130 08:31:24.898045 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7c987d2-eb6f-4ad7-a6b3-97181526dc24-kube-api-access-85spt" (OuterVolumeSpecName: "kube-api-access-85spt") pod "e7c987d2-eb6f-4ad7-a6b3-97181526dc24" (UID: "e7c987d2-eb6f-4ad7-a6b3-97181526dc24"). InnerVolumeSpecName "kube-api-access-85spt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:31:24 crc kubenswrapper[4870]: I0130 08:31:24.919606 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7c987d2-eb6f-4ad7-a6b3-97181526dc24-config-data" (OuterVolumeSpecName: "config-data") pod "e7c987d2-eb6f-4ad7-a6b3-97181526dc24" (UID: "e7c987d2-eb6f-4ad7-a6b3-97181526dc24"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:31:24 crc kubenswrapper[4870]: I0130 08:31:24.920097 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7c987d2-eb6f-4ad7-a6b3-97181526dc24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7c987d2-eb6f-4ad7-a6b3-97181526dc24" (UID: "e7c987d2-eb6f-4ad7-a6b3-97181526dc24"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:31:24 crc kubenswrapper[4870]: I0130 08:31:24.996075 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85spt\" (UniqueName: \"kubernetes.io/projected/e7c987d2-eb6f-4ad7-a6b3-97181526dc24-kube-api-access-85spt\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:24 crc kubenswrapper[4870]: I0130 08:31:24.996121 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c987d2-eb6f-4ad7-a6b3-97181526dc24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:24 crc kubenswrapper[4870]: I0130 08:31:24.996134 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7c987d2-eb6f-4ad7-a6b3-97181526dc24-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.250477 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.250917 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.478978 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f6319a2a-594b-4da1-be42-ad0918221515","Type":"ContainerStarted","Data":"985c9a0893bc80da949bc398f770276f79a8adc7c1e7d4dab27df98aa10edf8b"} Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.485226 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7c987d2-eb6f-4ad7-a6b3-97181526dc24","Type":"ContainerDied","Data":"4844fa8b96732628baf7088ac1c015a802eaab1095cb6ae769aae1a9f257db48"} Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.485280 4870 scope.go:117] "RemoveContainer" containerID="eeeda60427741389f29ce7682e68db91dead5b11853ceb5b520159a71514e942" Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.485290 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.512960 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.51293849 podStartE2EDuration="2.51293849s" podCreationTimestamp="2026-01-30 08:31:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:25.511949799 +0000 UTC m=+1324.207496918" watchObservedRunningTime="2026-01-30 08:31:25.51293849 +0000 UTC m=+1324.208485609" Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.551536 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.568987 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.583273 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:31:25 crc kubenswrapper[4870]: E0130 08:31:25.583854 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c987d2-eb6f-4ad7-a6b3-97181526dc24" containerName="nova-scheduler-scheduler" Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.583901 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c987d2-eb6f-4ad7-a6b3-97181526dc24" containerName="nova-scheduler-scheduler" Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.584187 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7c987d2-eb6f-4ad7-a6b3-97181526dc24" containerName="nova-scheduler-scheduler" Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.585092 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.587902 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.596558 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.715407 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nzwt\" (UniqueName: \"kubernetes.io/projected/7769eb04-0ff3-41ef-9977-e66563ea4085-kube-api-access-4nzwt\") pod \"nova-scheduler-0\" (UID: \"7769eb04-0ff3-41ef-9977-e66563ea4085\") " pod="openstack/nova-scheduler-0" Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.715755 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7769eb04-0ff3-41ef-9977-e66563ea4085-config-data\") pod \"nova-scheduler-0\" (UID: \"7769eb04-0ff3-41ef-9977-e66563ea4085\") " pod="openstack/nova-scheduler-0" Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.715876 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7769eb04-0ff3-41ef-9977-e66563ea4085-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7769eb04-0ff3-41ef-9977-e66563ea4085\") " pod="openstack/nova-scheduler-0" Jan 30 08:31:25 crc kubenswrapper[4870]: E0130 08:31:25.763189 4870 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7c987d2_eb6f_4ad7_a6b3_97181526dc24.slice/crio-4844fa8b96732628baf7088ac1c015a802eaab1095cb6ae769aae1a9f257db48\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7c987d2_eb6f_4ad7_a6b3_97181526dc24.slice\": RecentStats: unable to find data in memory cache]" Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.817575 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nzwt\" (UniqueName: \"kubernetes.io/projected/7769eb04-0ff3-41ef-9977-e66563ea4085-kube-api-access-4nzwt\") pod \"nova-scheduler-0\" (UID: \"7769eb04-0ff3-41ef-9977-e66563ea4085\") " pod="openstack/nova-scheduler-0" Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.817661 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7769eb04-0ff3-41ef-9977-e66563ea4085-config-data\") pod \"nova-scheduler-0\" (UID: \"7769eb04-0ff3-41ef-9977-e66563ea4085\") " pod="openstack/nova-scheduler-0" Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.817685 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7769eb04-0ff3-41ef-9977-e66563ea4085-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7769eb04-0ff3-41ef-9977-e66563ea4085\") " pod="openstack/nova-scheduler-0" Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.825357 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7769eb04-0ff3-41ef-9977-e66563ea4085-config-data\") pod \"nova-scheduler-0\" (UID: \"7769eb04-0ff3-41ef-9977-e66563ea4085\") " 
pod="openstack/nova-scheduler-0" Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.837868 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7769eb04-0ff3-41ef-9977-e66563ea4085-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7769eb04-0ff3-41ef-9977-e66563ea4085\") " pod="openstack/nova-scheduler-0" Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.848254 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nzwt\" (UniqueName: \"kubernetes.io/projected/7769eb04-0ff3-41ef-9977-e66563ea4085-kube-api-access-4nzwt\") pod \"nova-scheduler-0\" (UID: \"7769eb04-0ff3-41ef-9977-e66563ea4085\") " pod="openstack/nova-scheduler-0" Jan 30 08:31:25 crc kubenswrapper[4870]: I0130 08:31:25.913380 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.102653 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7c987d2-eb6f-4ad7-a6b3-97181526dc24" path="/var/lib/kubelet/pods/e7c987d2-eb6f-4ad7-a6b3-97181526dc24/volumes" Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.143458 4870 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod35497556-3464-49d4-9dc2-8f8153a1db82"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod35497556-3464-49d4-9dc2-8f8153a1db82] : Timed out while waiting for systemd to remove kubepods-besteffort-pod35497556_3464_49d4_9dc2_8f8153a1db82.slice" Jan 30 08:31:26 crc kubenswrapper[4870]: E0130 08:31:26.143519 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod35497556-3464-49d4-9dc2-8f8153a1db82] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod35497556-3464-49d4-9dc2-8f8153a1db82] : Timed out while waiting for systemd to remove kubepods-besteffort-pod35497556_3464_49d4_9dc2_8f8153a1db82.slice" pod="openstack/nova-api-0" podUID="35497556-3464-49d4-9dc2-8f8153a1db82" Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.472586 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.501606 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7769eb04-0ff3-41ef-9977-e66563ea4085","Type":"ContainerStarted","Data":"e5d61cbfee6394ab7139462d11ff3290876e24938531670c58f9dd81e3a55b6c"} Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.501680 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.576588 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.594715 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.618637 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.621205 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.623808 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.636462 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.752231 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1de7242-d69a-4e86-8461-a771c855adf9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a1de7242-d69a-4e86-8461-a771c855adf9\") " pod="openstack/nova-api-0" Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.752308 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6784l\" (UniqueName: \"kubernetes.io/projected/a1de7242-d69a-4e86-8461-a771c855adf9-kube-api-access-6784l\") pod \"nova-api-0\" (UID: \"a1de7242-d69a-4e86-8461-a771c855adf9\") " pod="openstack/nova-api-0" Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.752418 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1de7242-d69a-4e86-8461-a771c855adf9-config-data\") pod \"nova-api-0\" (UID: \"a1de7242-d69a-4e86-8461-a771c855adf9\") " pod="openstack/nova-api-0" Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.752500 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1de7242-d69a-4e86-8461-a771c855adf9-logs\") pod \"nova-api-0\" (UID: \"a1de7242-d69a-4e86-8461-a771c855adf9\") " pod="openstack/nova-api-0" Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.854309 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1de7242-d69a-4e86-8461-a771c855adf9-config-data\") pod \"nova-api-0\" (UID: \"a1de7242-d69a-4e86-8461-a771c855adf9\") " pod="openstack/nova-api-0" Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.854447 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1de7242-d69a-4e86-8461-a771c855adf9-logs\") pod \"nova-api-0\" (UID: \"a1de7242-d69a-4e86-8461-a771c855adf9\") " pod="openstack/nova-api-0" Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.854626 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1de7242-d69a-4e86-8461-a771c855adf9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a1de7242-d69a-4e86-8461-a771c855adf9\") " pod="openstack/nova-api-0" Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.854693 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6784l\" (UniqueName: \"kubernetes.io/projected/a1de7242-d69a-4e86-8461-a771c855adf9-kube-api-access-6784l\") pod \"nova-api-0\" (UID: \"a1de7242-d69a-4e86-8461-a771c855adf9\") " pod="openstack/nova-api-0" Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.856513 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1de7242-d69a-4e86-8461-a771c855adf9-logs\") pod \"nova-api-0\" (UID: \"a1de7242-d69a-4e86-8461-a771c855adf9\") " 
pod="openstack/nova-api-0" Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.860450 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1de7242-d69a-4e86-8461-a771c855adf9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a1de7242-d69a-4e86-8461-a771c855adf9\") " pod="openstack/nova-api-0" Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.862717 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1de7242-d69a-4e86-8461-a771c855adf9-config-data\") pod \"nova-api-0\" (UID: \"a1de7242-d69a-4e86-8461-a771c855adf9\") " pod="openstack/nova-api-0" Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.877730 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6784l\" (UniqueName: \"kubernetes.io/projected/a1de7242-d69a-4e86-8461-a771c855adf9-kube-api-access-6784l\") pod \"nova-api-0\" (UID: \"a1de7242-d69a-4e86-8461-a771c855adf9\") " pod="openstack/nova-api-0" Jan 30 08:31:26 crc kubenswrapper[4870]: I0130 08:31:26.953969 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:31:27 crc kubenswrapper[4870]: I0130 08:31:27.438905 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:31:27 crc kubenswrapper[4870]: W0130 08:31:27.453719 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1de7242_d69a_4e86_8461_a771c855adf9.slice/crio-e2b4d49702eb3a18798f8e9ef5ef000c9c2ecb3963af90a645e06a8320f3524e WatchSource:0}: Error finding container e2b4d49702eb3a18798f8e9ef5ef000c9c2ecb3963af90a645e06a8320f3524e: Status 404 returned error can't find the container with id e2b4d49702eb3a18798f8e9ef5ef000c9c2ecb3963af90a645e06a8320f3524e Jan 30 08:31:27 crc kubenswrapper[4870]: I0130 08:31:27.519824 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7769eb04-0ff3-41ef-9977-e66563ea4085","Type":"ContainerStarted","Data":"aae79fe40acd6f2726aec10f649836526fb8e19a3f4e4e131c09bcefac9d4c66"} Jan 30 08:31:27 crc kubenswrapper[4870]: I0130 08:31:27.520866 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1de7242-d69a-4e86-8461-a771c855adf9","Type":"ContainerStarted","Data":"e2b4d49702eb3a18798f8e9ef5ef000c9c2ecb3963af90a645e06a8320f3524e"} Jan 30 08:31:27 crc kubenswrapper[4870]: I0130 08:31:27.547598 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.547583745 podStartE2EDuration="2.547583745s" podCreationTimestamp="2026-01-30 08:31:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:27.540523023 +0000 UTC m=+1326.236070142" watchObservedRunningTime="2026-01-30 08:31:27.547583745 +0000 UTC m=+1326.243130854" Jan 30 08:31:28 crc kubenswrapper[4870]: I0130 08:31:28.089650 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35497556-3464-49d4-9dc2-8f8153a1db82" path="/var/lib/kubelet/pods/35497556-3464-49d4-9dc2-8f8153a1db82/volumes" Jan 30 08:31:28 crc kubenswrapper[4870]: I0130 08:31:28.537043 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"a1de7242-d69a-4e86-8461-a771c855adf9","Type":"ContainerStarted","Data":"2975101aa8a5b490aa2e1e0a7bd6ecd8816f9596dafd908075f0463bf0a9804d"} Jan 30 08:31:28 crc kubenswrapper[4870]: I0130 08:31:28.537462 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1de7242-d69a-4e86-8461-a771c855adf9","Type":"ContainerStarted","Data":"91208fdae46fcfb2ca1ade5a3e1bc413f85090f1c513fef0d1f92dd39faaad24"} Jan 30 08:31:28 crc kubenswrapper[4870]: I0130 08:31:28.836126 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:29 crc kubenswrapper[4870]: I0130 08:31:29.581250 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.581227469 podStartE2EDuration="3.581227469s" podCreationTimestamp="2026-01-30 08:31:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:29.57427793 +0000 UTC m=+1328.269825049" watchObservedRunningTime="2026-01-30 08:31:29.581227469 +0000 UTC m=+1328.276774588" Jan 30 08:31:30 crc kubenswrapper[4870]: I0130 08:31:30.913546 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 08:31:33 crc kubenswrapper[4870]: I0130 08:31:33.836124 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:33 crc kubenswrapper[4870]: I0130 08:31:33.855395 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:34 crc kubenswrapper[4870]: I0130 08:31:34.645477 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:31:34 crc kubenswrapper[4870]: I0130 08:31:34.854042 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-hhwc4"] Jan 30 08:31:34 crc kubenswrapper[4870]: I0130 08:31:34.855675 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hhwc4" Jan 30 08:31:34 crc kubenswrapper[4870]: I0130 08:31:34.858609 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 30 08:31:34 crc kubenswrapper[4870]: I0130 08:31:34.858857 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 30 08:31:34 crc kubenswrapper[4870]: I0130 08:31:34.866492 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hhwc4"] Jan 30 08:31:34 crc kubenswrapper[4870]: I0130 08:31:34.942541 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1dfb454-58dc-4c83-b25e-cabaab6cb747-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hhwc4\" (UID: \"c1dfb454-58dc-4c83-b25e-cabaab6cb747\") " pod="openstack/nova-cell1-cell-mapping-hhwc4" Jan 30 08:31:34 crc kubenswrapper[4870]: I0130 08:31:34.942587 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5d7x\" (UniqueName: \"kubernetes.io/projected/c1dfb454-58dc-4c83-b25e-cabaab6cb747-kube-api-access-b5d7x\") pod \"nova-cell1-cell-mapping-hhwc4\" (UID: \"c1dfb454-58dc-4c83-b25e-cabaab6cb747\") " pod="openstack/nova-cell1-cell-mapping-hhwc4" Jan 30 08:31:34 crc kubenswrapper[4870]: I0130 08:31:34.942609 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1dfb454-58dc-4c83-b25e-cabaab6cb747-config-data\") pod \"nova-cell1-cell-mapping-hhwc4\" (UID: \"c1dfb454-58dc-4c83-b25e-cabaab6cb747\") " pod="openstack/nova-cell1-cell-mapping-hhwc4" Jan 30 08:31:34 crc kubenswrapper[4870]: I0130 08:31:34.942637 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1dfb454-58dc-4c83-b25e-cabaab6cb747-scripts\") pod \"nova-cell1-cell-mapping-hhwc4\" (UID: \"c1dfb454-58dc-4c83-b25e-cabaab6cb747\") " pod="openstack/nova-cell1-cell-mapping-hhwc4" Jan 30 08:31:35 crc kubenswrapper[4870]: I0130 08:31:35.044566 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1dfb454-58dc-4c83-b25e-cabaab6cb747-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hhwc4\" (UID: \"c1dfb454-58dc-4c83-b25e-cabaab6cb747\") " pod="openstack/nova-cell1-cell-mapping-hhwc4" Jan 30 08:31:35 crc kubenswrapper[4870]: I0130 08:31:35.044620 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5d7x\" (UniqueName: \"kubernetes.io/projected/c1dfb454-58dc-4c83-b25e-cabaab6cb747-kube-api-access-b5d7x\") pod \"nova-cell1-cell-mapping-hhwc4\" (UID: \"c1dfb454-58dc-4c83-b25e-cabaab6cb747\") " pod="openstack/nova-cell1-cell-mapping-hhwc4" Jan 30 08:31:35 crc kubenswrapper[4870]: I0130 08:31:35.044652 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1dfb454-58dc-4c83-b25e-cabaab6cb747-config-data\") pod \"nova-cell1-cell-mapping-hhwc4\" (UID: \"c1dfb454-58dc-4c83-b25e-cabaab6cb747\") " pod="openstack/nova-cell1-cell-mapping-hhwc4" Jan 30 08:31:35 crc kubenswrapper[4870]: I0130 08:31:35.044695 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/c1dfb454-58dc-4c83-b25e-cabaab6cb747-scripts\") pod \"nova-cell1-cell-mapping-hhwc4\" (UID: \"c1dfb454-58dc-4c83-b25e-cabaab6cb747\") " pod="openstack/nova-cell1-cell-mapping-hhwc4" Jan 30 08:31:35 crc kubenswrapper[4870]: I0130 08:31:35.056082 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1dfb454-58dc-4c83-b25e-cabaab6cb747-scripts\") pod \"nova-cell1-cell-mapping-hhwc4\" (UID: \"c1dfb454-58dc-4c83-b25e-cabaab6cb747\") " pod="openstack/nova-cell1-cell-mapping-hhwc4" Jan 30 08:31:35 crc kubenswrapper[4870]: I0130 08:31:35.056100 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1dfb454-58dc-4c83-b25e-cabaab6cb747-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hhwc4\" (UID: \"c1dfb454-58dc-4c83-b25e-cabaab6cb747\") " pod="openstack/nova-cell1-cell-mapping-hhwc4" Jan 30 08:31:35 crc kubenswrapper[4870]: I0130 08:31:35.057556 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1dfb454-58dc-4c83-b25e-cabaab6cb747-config-data\") pod \"nova-cell1-cell-mapping-hhwc4\" (UID: \"c1dfb454-58dc-4c83-b25e-cabaab6cb747\") " pod="openstack/nova-cell1-cell-mapping-hhwc4" Jan 30 08:31:35 crc kubenswrapper[4870]: I0130 08:31:35.068609 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5d7x\" (UniqueName: \"kubernetes.io/projected/c1dfb454-58dc-4c83-b25e-cabaab6cb747-kube-api-access-b5d7x\") pod \"nova-cell1-cell-mapping-hhwc4\" (UID: \"c1dfb454-58dc-4c83-b25e-cabaab6cb747\") " pod="openstack/nova-cell1-cell-mapping-hhwc4" Jan 30 08:31:35 crc kubenswrapper[4870]: I0130 08:31:35.182263 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hhwc4" Jan 30 08:31:35 crc kubenswrapper[4870]: I0130 08:31:35.913597 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 08:31:35 crc kubenswrapper[4870]: I0130 08:31:35.979899 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 08:31:36 crc kubenswrapper[4870]: I0130 08:31:36.217335 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hhwc4"] Jan 30 08:31:36 crc kubenswrapper[4870]: I0130 08:31:36.633796 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hhwc4" event={"ID":"c1dfb454-58dc-4c83-b25e-cabaab6cb747","Type":"ContainerStarted","Data":"4b78d78b78d21abcc7506de0b24454a50e055736a3c90f711e671ea39c5653ae"} Jan 30 08:31:36 crc kubenswrapper[4870]: I0130 08:31:36.633869 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hhwc4" event={"ID":"c1dfb454-58dc-4c83-b25e-cabaab6cb747","Type":"ContainerStarted","Data":"0018915fa01663a767ed29852c909f16d8cf1a8a4ada56611946c2a2c6b2ea35"} Jan 30 08:31:36 crc kubenswrapper[4870]: I0130 08:31:36.661483 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-hhwc4" podStartSLOduration=2.661459423 podStartE2EDuration="2.661459423s" podCreationTimestamp="2026-01-30 08:31:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:36.651519739 +0000 UTC m=+1335.347066868" watchObservedRunningTime="2026-01-30 08:31:36.661459423 +0000 UTC m=+1335.357006542" Jan 30 08:31:36 crc kubenswrapper[4870]: I0130 08:31:36.684192 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 08:31:36 crc kubenswrapper[4870]: I0130 08:31:36.955047 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 08:31:36 crc kubenswrapper[4870]: I0130 08:31:36.955115 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 08:31:37 crc kubenswrapper[4870]: I0130 08:31:37.996289 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a1de7242-d69a-4e86-8461-a771c855adf9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.222:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:31:37 crc kubenswrapper[4870]: I0130 08:31:37.996305 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a1de7242-d69a-4e86-8461-a771c855adf9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.222:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:31:42 crc kubenswrapper[4870]: I0130 08:31:41.703612 4870 generic.go:334] "Generic (PLEG): container finished" podID="c1dfb454-58dc-4c83-b25e-cabaab6cb747" containerID="4b78d78b78d21abcc7506de0b24454a50e055736a3c90f711e671ea39c5653ae" exitCode=0 Jan 30 08:31:42 crc kubenswrapper[4870]: I0130 08:31:41.703760 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hhwc4" 
event={"ID":"c1dfb454-58dc-4c83-b25e-cabaab6cb747","Type":"ContainerDied","Data":"4b78d78b78d21abcc7506de0b24454a50e055736a3c90f711e671ea39c5653ae"} Jan 30 08:31:43 crc kubenswrapper[4870]: I0130 08:31:43.167321 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hhwc4" Jan 30 08:31:43 crc kubenswrapper[4870]: I0130 08:31:43.315074 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1dfb454-58dc-4c83-b25e-cabaab6cb747-scripts\") pod \"c1dfb454-58dc-4c83-b25e-cabaab6cb747\" (UID: \"c1dfb454-58dc-4c83-b25e-cabaab6cb747\") " Jan 30 08:31:43 crc kubenswrapper[4870]: I0130 08:31:43.315256 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5d7x\" (UniqueName: \"kubernetes.io/projected/c1dfb454-58dc-4c83-b25e-cabaab6cb747-kube-api-access-b5d7x\") pod \"c1dfb454-58dc-4c83-b25e-cabaab6cb747\" (UID: \"c1dfb454-58dc-4c83-b25e-cabaab6cb747\") " Jan 30 08:31:43 crc kubenswrapper[4870]: I0130 08:31:43.315374 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1dfb454-58dc-4c83-b25e-cabaab6cb747-combined-ca-bundle\") pod \"c1dfb454-58dc-4c83-b25e-cabaab6cb747\" (UID: \"c1dfb454-58dc-4c83-b25e-cabaab6cb747\") " Jan 30 08:31:43 crc kubenswrapper[4870]: I0130 08:31:43.315512 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1dfb454-58dc-4c83-b25e-cabaab6cb747-config-data\") pod \"c1dfb454-58dc-4c83-b25e-cabaab6cb747\" (UID: \"c1dfb454-58dc-4c83-b25e-cabaab6cb747\") " Jan 30 08:31:43 crc kubenswrapper[4870]: I0130 08:31:43.321616 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1dfb454-58dc-4c83-b25e-cabaab6cb747-kube-api-access-b5d7x" (OuterVolumeSpecName: "kube-api-access-b5d7x") pod "c1dfb454-58dc-4c83-b25e-cabaab6cb747" (UID: "c1dfb454-58dc-4c83-b25e-cabaab6cb747"). InnerVolumeSpecName "kube-api-access-b5d7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:31:43 crc kubenswrapper[4870]: I0130 08:31:43.323183 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1dfb454-58dc-4c83-b25e-cabaab6cb747-scripts" (OuterVolumeSpecName: "scripts") pod "c1dfb454-58dc-4c83-b25e-cabaab6cb747" (UID: "c1dfb454-58dc-4c83-b25e-cabaab6cb747"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:31:43 crc kubenswrapper[4870]: I0130 08:31:43.349055 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1dfb454-58dc-4c83-b25e-cabaab6cb747-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1dfb454-58dc-4c83-b25e-cabaab6cb747" (UID: "c1dfb454-58dc-4c83-b25e-cabaab6cb747"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:31:43 crc kubenswrapper[4870]: I0130 08:31:43.370191 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1dfb454-58dc-4c83-b25e-cabaab6cb747-config-data" (OuterVolumeSpecName: "config-data") pod "c1dfb454-58dc-4c83-b25e-cabaab6cb747" (UID: "c1dfb454-58dc-4c83-b25e-cabaab6cb747"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:31:43 crc kubenswrapper[4870]: I0130 08:31:43.418421 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5d7x\" (UniqueName: \"kubernetes.io/projected/c1dfb454-58dc-4c83-b25e-cabaab6cb747-kube-api-access-b5d7x\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:43 crc kubenswrapper[4870]: I0130 08:31:43.418469 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1dfb454-58dc-4c83-b25e-cabaab6cb747-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:43 crc kubenswrapper[4870]: I0130 08:31:43.418485 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1dfb454-58dc-4c83-b25e-cabaab6cb747-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:43 crc kubenswrapper[4870]: I0130 08:31:43.418496 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1dfb454-58dc-4c83-b25e-cabaab6cb747-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:43 crc kubenswrapper[4870]: I0130 08:31:43.733567 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hhwc4" event={"ID":"c1dfb454-58dc-4c83-b25e-cabaab6cb747","Type":"ContainerDied","Data":"0018915fa01663a767ed29852c909f16d8cf1a8a4ada56611946c2a2c6b2ea35"} Jan 30 08:31:43 crc kubenswrapper[4870]: I0130 08:31:43.734024 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0018915fa01663a767ed29852c909f16d8cf1a8a4ada56611946c2a2c6b2ea35" Jan 30 08:31:43 crc kubenswrapper[4870]: I0130 08:31:43.733639 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hhwc4" Jan 30 08:31:43 crc kubenswrapper[4870]: I0130 08:31:43.933383 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:31:43 crc kubenswrapper[4870]: I0130 08:31:43.933592 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7769eb04-0ff3-41ef-9977-e66563ea4085" containerName="nova-scheduler-scheduler" containerID="cri-o://aae79fe40acd6f2726aec10f649836526fb8e19a3f4e4e131c09bcefac9d4c66" gracePeriod=30 Jan 30 08:31:43 crc kubenswrapper[4870]: I0130 08:31:43.985472 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:31:43 crc kubenswrapper[4870]: I0130 08:31:43.985711 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a1de7242-d69a-4e86-8461-a771c855adf9" containerName="nova-api-log" containerID="cri-o://91208fdae46fcfb2ca1ade5a3e1bc413f85090f1c513fef0d1f92dd39faaad24" gracePeriod=30 Jan 30 08:31:43 crc kubenswrapper[4870]: I0130 08:31:43.985818 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a1de7242-d69a-4e86-8461-a771c855adf9" containerName="nova-api-api" containerID="cri-o://2975101aa8a5b490aa2e1e0a7bd6ecd8816f9596dafd908075f0463bf0a9804d" gracePeriod=30 Jan 30 08:31:44 crc kubenswrapper[4870]: I0130 08:31:44.006407 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:31:44 crc kubenswrapper[4870]: I0130 08:31:44.006661 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dcdab968-579c-4189-87c5-05bad5469d6c" 
containerName="nova-metadata-log" containerID="cri-o://d24242e6d0acaec6d8bd3dd1615c1d42924eeb6b050411d36a27e15a6dedb161" gracePeriod=30 Jan 30 08:31:44 crc kubenswrapper[4870]: I0130 08:31:44.006774 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dcdab968-579c-4189-87c5-05bad5469d6c" containerName="nova-metadata-metadata" containerID="cri-o://366fec4d0fab4df6c701136c8cfd06595e8cf3699db478403889218f9cdeaf82" gracePeriod=30 Jan 30 08:31:44 crc kubenswrapper[4870]: I0130 08:31:44.742393 4870 generic.go:334] "Generic (PLEG): container finished" podID="a1de7242-d69a-4e86-8461-a771c855adf9" containerID="91208fdae46fcfb2ca1ade5a3e1bc413f85090f1c513fef0d1f92dd39faaad24" exitCode=143 Jan 30 08:31:44 crc kubenswrapper[4870]: I0130 08:31:44.742476 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1de7242-d69a-4e86-8461-a771c855adf9","Type":"ContainerDied","Data":"91208fdae46fcfb2ca1ade5a3e1bc413f85090f1c513fef0d1f92dd39faaad24"} Jan 30 08:31:44 crc kubenswrapper[4870]: I0130 08:31:44.744674 4870 generic.go:334] "Generic (PLEG): container finished" podID="dcdab968-579c-4189-87c5-05bad5469d6c" containerID="d24242e6d0acaec6d8bd3dd1615c1d42924eeb6b050411d36a27e15a6dedb161" exitCode=143 Jan 30 08:31:44 crc kubenswrapper[4870]: I0130 08:31:44.744694 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dcdab968-579c-4189-87c5-05bad5469d6c","Type":"ContainerDied","Data":"d24242e6d0acaec6d8bd3dd1615c1d42924eeb6b050411d36a27e15a6dedb161"} Jan 30 08:31:44 crc kubenswrapper[4870]: I0130 08:31:44.773081 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 08:31:44 crc kubenswrapper[4870]: I0130 08:31:44.880069 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="dcdab968-579c-4189-87c5-05bad5469d6c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": read tcp 10.217.0.2:33858->10.217.0.216:8775: read: connection reset by peer" Jan 30 08:31:44 crc kubenswrapper[4870]: I0130 08:31:44.880069 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="dcdab968-579c-4189-87c5-05bad5469d6c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": read tcp 10.217.0.2:33850->10.217.0.216:8775: read: connection reset by peer" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.240992 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.353013 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.360575 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcdab968-579c-4189-87c5-05bad5469d6c-logs\") pod \"dcdab968-579c-4189-87c5-05bad5469d6c\" (UID: \"dcdab968-579c-4189-87c5-05bad5469d6c\") " Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.360696 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcdab968-579c-4189-87c5-05bad5469d6c-combined-ca-bundle\") pod \"dcdab968-579c-4189-87c5-05bad5469d6c\" (UID: \"dcdab968-579c-4189-87c5-05bad5469d6c\") " Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.360822 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcdab968-579c-4189-87c5-05bad5469d6c-nova-metadata-tls-certs\") pod \"dcdab968-579c-4189-87c5-05bad5469d6c\" (UID: \"dcdab968-579c-4189-87c5-05bad5469d6c\") " Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.360910 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcdab968-579c-4189-87c5-05bad5469d6c-config-data\") pod \"dcdab968-579c-4189-87c5-05bad5469d6c\" (UID: \"dcdab968-579c-4189-87c5-05bad5469d6c\") " Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.360938 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd5ql\" (UniqueName: \"kubernetes.io/projected/dcdab968-579c-4189-87c5-05bad5469d6c-kube-api-access-hd5ql\") pod \"dcdab968-579c-4189-87c5-05bad5469d6c\" (UID: \"dcdab968-579c-4189-87c5-05bad5469d6c\") " Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.361770 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcdab968-579c-4189-87c5-05bad5469d6c-logs" (OuterVolumeSpecName: "logs") pod "dcdab968-579c-4189-87c5-05bad5469d6c" (UID: "dcdab968-579c-4189-87c5-05bad5469d6c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.367416 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcdab968-579c-4189-87c5-05bad5469d6c-kube-api-access-hd5ql" (OuterVolumeSpecName: "kube-api-access-hd5ql") pod "dcdab968-579c-4189-87c5-05bad5469d6c" (UID: "dcdab968-579c-4189-87c5-05bad5469d6c"). InnerVolumeSpecName "kube-api-access-hd5ql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.414910 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcdab968-579c-4189-87c5-05bad5469d6c-config-data" (OuterVolumeSpecName: "config-data") pod "dcdab968-579c-4189-87c5-05bad5469d6c" (UID: "dcdab968-579c-4189-87c5-05bad5469d6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.436818 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcdab968-579c-4189-87c5-05bad5469d6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dcdab968-579c-4189-87c5-05bad5469d6c" (UID: "dcdab968-579c-4189-87c5-05bad5469d6c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.439451 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcdab968-579c-4189-87c5-05bad5469d6c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "dcdab968-579c-4189-87c5-05bad5469d6c" (UID: "dcdab968-579c-4189-87c5-05bad5469d6c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.463260 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1de7242-d69a-4e86-8461-a771c855adf9-config-data\") pod \"a1de7242-d69a-4e86-8461-a771c855adf9\" (UID: \"a1de7242-d69a-4e86-8461-a771c855adf9\") " Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.463702 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1de7242-d69a-4e86-8461-a771c855adf9-logs\") pod \"a1de7242-d69a-4e86-8461-a771c855adf9\" (UID: \"a1de7242-d69a-4e86-8461-a771c855adf9\") " Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.463820 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6784l\" (UniqueName: \"kubernetes.io/projected/a1de7242-d69a-4e86-8461-a771c855adf9-kube-api-access-6784l\") pod \"a1de7242-d69a-4e86-8461-a771c855adf9\" (UID: \"a1de7242-d69a-4e86-8461-a771c855adf9\") " Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.463901 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1de7242-d69a-4e86-8461-a771c855adf9-combined-ca-bundle\") pod \"a1de7242-d69a-4e86-8461-a771c855adf9\" (UID: \"a1de7242-d69a-4e86-8461-a771c855adf9\") " Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.464584 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1de7242-d69a-4e86-8461-a771c855adf9-logs" (OuterVolumeSpecName: "logs") pod "a1de7242-d69a-4e86-8461-a771c855adf9" (UID: "a1de7242-d69a-4e86-8461-a771c855adf9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.464754 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcdab968-579c-4189-87c5-05bad5469d6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.464775 4870 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcdab968-579c-4189-87c5-05bad5469d6c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.464788 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcdab968-579c-4189-87c5-05bad5469d6c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.464797 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd5ql\" (UniqueName: \"kubernetes.io/projected/dcdab968-579c-4189-87c5-05bad5469d6c-kube-api-access-hd5ql\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.464807 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcdab968-579c-4189-87c5-05bad5469d6c-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.464816 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1de7242-d69a-4e86-8461-a771c855adf9-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.467111 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1de7242-d69a-4e86-8461-a771c855adf9-kube-api-access-6784l" (OuterVolumeSpecName: "kube-api-access-6784l") pod "a1de7242-d69a-4e86-8461-a771c855adf9" (UID: "a1de7242-d69a-4e86-8461-a771c855adf9"). InnerVolumeSpecName "kube-api-access-6784l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.492302 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1de7242-d69a-4e86-8461-a771c855adf9-config-data" (OuterVolumeSpecName: "config-data") pod "a1de7242-d69a-4e86-8461-a771c855adf9" (UID: "a1de7242-d69a-4e86-8461-a771c855adf9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.492418 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1de7242-d69a-4e86-8461-a771c855adf9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1de7242-d69a-4e86-8461-a771c855adf9" (UID: "a1de7242-d69a-4e86-8461-a771c855adf9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.567200 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1de7242-d69a-4e86-8461-a771c855adf9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.567240 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6784l\" (UniqueName: \"kubernetes.io/projected/a1de7242-d69a-4e86-8461-a771c855adf9-kube-api-access-6784l\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.567250 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1de7242-d69a-4e86-8461-a771c855adf9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.756259 4870 generic.go:334] "Generic (PLEG): container finished" podID="a1de7242-d69a-4e86-8461-a771c855adf9" containerID="2975101aa8a5b490aa2e1e0a7bd6ecd8816f9596dafd908075f0463bf0a9804d" exitCode=0 Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.756308 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.756373 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1de7242-d69a-4e86-8461-a771c855adf9","Type":"ContainerDied","Data":"2975101aa8a5b490aa2e1e0a7bd6ecd8816f9596dafd908075f0463bf0a9804d"} Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.756442 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1de7242-d69a-4e86-8461-a771c855adf9","Type":"ContainerDied","Data":"e2b4d49702eb3a18798f8e9ef5ef000c9c2ecb3963af90a645e06a8320f3524e"} Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.756473 4870 scope.go:117] "RemoveContainer" containerID="2975101aa8a5b490aa2e1e0a7bd6ecd8816f9596dafd908075f0463bf0a9804d" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.768963 4870 generic.go:334] "Generic (PLEG): container finished" podID="dcdab968-579c-4189-87c5-05bad5469d6c" containerID="366fec4d0fab4df6c701136c8cfd06595e8cf3699db478403889218f9cdeaf82" exitCode=0 Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.768997 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dcdab968-579c-4189-87c5-05bad5469d6c","Type":"ContainerDied","Data":"366fec4d0fab4df6c701136c8cfd06595e8cf3699db478403889218f9cdeaf82"} Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.769020 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.769021 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dcdab968-579c-4189-87c5-05bad5469d6c","Type":"ContainerDied","Data":"01a2218e5e8fc92f78dfaa53cf3e950822f8a4a9869c349d33052df56fc52370"} Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.790531 4870 scope.go:117] "RemoveContainer" containerID="91208fdae46fcfb2ca1ade5a3e1bc413f85090f1c513fef0d1f92dd39faaad24" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.795743 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.806501 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.816901 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.819778 4870 scope.go:117] "RemoveContainer" containerID="2975101aa8a5b490aa2e1e0a7bd6ecd8816f9596dafd908075f0463bf0a9804d" Jan 30 08:31:45 crc kubenswrapper[4870]: E0130 08:31:45.821006 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2975101aa8a5b490aa2e1e0a7bd6ecd8816f9596dafd908075f0463bf0a9804d\": container with ID starting with 2975101aa8a5b490aa2e1e0a7bd6ecd8816f9596dafd908075f0463bf0a9804d not found: ID does not exist" containerID="2975101aa8a5b490aa2e1e0a7bd6ecd8816f9596dafd908075f0463bf0a9804d" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.821076 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2975101aa8a5b490aa2e1e0a7bd6ecd8816f9596dafd908075f0463bf0a9804d"} err="failed to get container status \"2975101aa8a5b490aa2e1e0a7bd6ecd8816f9596dafd908075f0463bf0a9804d\": rpc error: code = NotFound desc = could not find container \"2975101aa8a5b490aa2e1e0a7bd6ecd8816f9596dafd908075f0463bf0a9804d\": container with ID starting with 2975101aa8a5b490aa2e1e0a7bd6ecd8816f9596dafd908075f0463bf0a9804d not found: ID does not exist" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.821125 4870 scope.go:117] "RemoveContainer" containerID="91208fdae46fcfb2ca1ade5a3e1bc413f85090f1c513fef0d1f92dd39faaad24" Jan 30 08:31:45 crc kubenswrapper[4870]: E0130 08:31:45.821587 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91208fdae46fcfb2ca1ade5a3e1bc413f85090f1c513fef0d1f92dd39faaad24\": container with ID starting with 91208fdae46fcfb2ca1ade5a3e1bc413f85090f1c513fef0d1f92dd39faaad24 not found: ID does not exist" containerID="91208fdae46fcfb2ca1ade5a3e1bc413f85090f1c513fef0d1f92dd39faaad24" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.821644 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91208fdae46fcfb2ca1ade5a3e1bc413f85090f1c513fef0d1f92dd39faaad24"} err="failed to get container status \"91208fdae46fcfb2ca1ade5a3e1bc413f85090f1c513fef0d1f92dd39faaad24\": rpc error: code = NotFound desc = could not find container \"91208fdae46fcfb2ca1ade5a3e1bc413f85090f1c513fef0d1f92dd39faaad24\": container with ID starting with 91208fdae46fcfb2ca1ade5a3e1bc413f85090f1c513fef0d1f92dd39faaad24 not found: ID does not exist" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.821682 4870 scope.go:117] "RemoveContainer" 
containerID="366fec4d0fab4df6c701136c8cfd06595e8cf3699db478403889218f9cdeaf82" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.826553 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.834265 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 08:31:45 crc kubenswrapper[4870]: E0130 08:31:45.834769 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcdab968-579c-4189-87c5-05bad5469d6c" containerName="nova-metadata-log" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.834785 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcdab968-579c-4189-87c5-05bad5469d6c" containerName="nova-metadata-log" Jan 30 08:31:45 crc kubenswrapper[4870]: E0130 08:31:45.834803 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1de7242-d69a-4e86-8461-a771c855adf9" containerName="nova-api-api" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.834811 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1de7242-d69a-4e86-8461-a771c855adf9" containerName="nova-api-api" Jan 30 08:31:45 crc kubenswrapper[4870]: E0130 08:31:45.834831 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1de7242-d69a-4e86-8461-a771c855adf9" containerName="nova-api-log" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.834837 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1de7242-d69a-4e86-8461-a771c855adf9" containerName="nova-api-log" Jan 30 08:31:45 crc kubenswrapper[4870]: E0130 08:31:45.834854 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcdab968-579c-4189-87c5-05bad5469d6c" containerName="nova-metadata-metadata" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.834864 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcdab968-579c-4189-87c5-05bad5469d6c" containerName="nova-metadata-metadata" Jan 30 08:31:45 crc kubenswrapper[4870]: E0130 08:31:45.834890 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1dfb454-58dc-4c83-b25e-cabaab6cb747" containerName="nova-manage" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.834897 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1dfb454-58dc-4c83-b25e-cabaab6cb747" containerName="nova-manage" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.835081 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1de7242-d69a-4e86-8461-a771c855adf9" containerName="nova-api-log" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.835101 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcdab968-579c-4189-87c5-05bad5469d6c" containerName="nova-metadata-metadata" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.835148 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1de7242-d69a-4e86-8461-a771c855adf9" containerName="nova-api-api" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.835160 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcdab968-579c-4189-87c5-05bad5469d6c" containerName="nova-metadata-log" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.835173 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1dfb454-58dc-4c83-b25e-cabaab6cb747" containerName="nova-manage" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.836245 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.838578 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.843272 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.845297 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.852115 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.852375 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.867847 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.878986 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.879382 4870 scope.go:117] "RemoveContainer" containerID="d24242e6d0acaec6d8bd3dd1615c1d42924eeb6b050411d36a27e15a6dedb161" Jan 30 08:31:45 crc kubenswrapper[4870]: E0130 08:31:45.924294 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aae79fe40acd6f2726aec10f649836526fb8e19a3f4e4e131c09bcefac9d4c66" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.925972 4870 scope.go:117] "RemoveContainer" containerID="366fec4d0fab4df6c701136c8cfd06595e8cf3699db478403889218f9cdeaf82" Jan 30 08:31:45 crc kubenswrapper[4870]: E0130 08:31:45.926507 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aae79fe40acd6f2726aec10f649836526fb8e19a3f4e4e131c09bcefac9d4c66" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 08:31:45 crc kubenswrapper[4870]: E0130 08:31:45.926668 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"366fec4d0fab4df6c701136c8cfd06595e8cf3699db478403889218f9cdeaf82\": container with ID starting with 366fec4d0fab4df6c701136c8cfd06595e8cf3699db478403889218f9cdeaf82 not found: ID does not exist" containerID="366fec4d0fab4df6c701136c8cfd06595e8cf3699db478403889218f9cdeaf82" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.926705 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"366fec4d0fab4df6c701136c8cfd06595e8cf3699db478403889218f9cdeaf82"} err="failed to get container status \"366fec4d0fab4df6c701136c8cfd06595e8cf3699db478403889218f9cdeaf82\": rpc error: code = NotFound desc = could not find container \"366fec4d0fab4df6c701136c8cfd06595e8cf3699db478403889218f9cdeaf82\": container with ID starting with 366fec4d0fab4df6c701136c8cfd06595e8cf3699db478403889218f9cdeaf82 not found: ID does not exist" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.926730 4870 scope.go:117] "RemoveContainer" 
containerID="d24242e6d0acaec6d8bd3dd1615c1d42924eeb6b050411d36a27e15a6dedb161" Jan 30 08:31:45 crc kubenswrapper[4870]: E0130 08:31:45.927052 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d24242e6d0acaec6d8bd3dd1615c1d42924eeb6b050411d36a27e15a6dedb161\": container with ID starting with d24242e6d0acaec6d8bd3dd1615c1d42924eeb6b050411d36a27e15a6dedb161 not found: ID does not exist" containerID="d24242e6d0acaec6d8bd3dd1615c1d42924eeb6b050411d36a27e15a6dedb161" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.927088 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d24242e6d0acaec6d8bd3dd1615c1d42924eeb6b050411d36a27e15a6dedb161"} err="failed to get container status \"d24242e6d0acaec6d8bd3dd1615c1d42924eeb6b050411d36a27e15a6dedb161\": rpc error: code = NotFound desc = could not find container \"d24242e6d0acaec6d8bd3dd1615c1d42924eeb6b050411d36a27e15a6dedb161\": container with ID starting with d24242e6d0acaec6d8bd3dd1615c1d42924eeb6b050411d36a27e15a6dedb161 not found: ID does not exist" Jan 30 08:31:45 crc kubenswrapper[4870]: E0130 08:31:45.929086 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aae79fe40acd6f2726aec10f649836526fb8e19a3f4e4e131c09bcefac9d4c66" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 08:31:45 crc kubenswrapper[4870]: E0130 08:31:45.929133 4870 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="7769eb04-0ff3-41ef-9977-e66563ea4085" containerName="nova-scheduler-scheduler" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.981853 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccdea203-220a-457e-b00f-61b48afc7329-logs\") pod \"nova-metadata-0\" (UID: \"ccdea203-220a-457e-b00f-61b48afc7329\") " pod="openstack/nova-metadata-0" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.982022 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccdea203-220a-457e-b00f-61b48afc7329-config-data\") pod \"nova-metadata-0\" (UID: \"ccdea203-220a-457e-b00f-61b48afc7329\") " pod="openstack/nova-metadata-0" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.982097 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccdea203-220a-457e-b00f-61b48afc7329-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ccdea203-220a-457e-b00f-61b48afc7329\") " pod="openstack/nova-metadata-0" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.982279 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmpzp\" (UniqueName: \"kubernetes.io/projected/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-kube-api-access-tmpzp\") pod \"nova-api-0\" (UID: \"23cc5a83-d937-4f75-8256-e2ea77e8fe0a\") " pod="openstack/nova-api-0" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.982491 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-config-data\") pod \"nova-api-0\" (UID: \"23cc5a83-d937-4f75-8256-e2ea77e8fe0a\") " pod="openstack/nova-api-0" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.982812 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"23cc5a83-d937-4f75-8256-e2ea77e8fe0a\") " pod="openstack/nova-api-0" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.982898 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnvh5\" (UniqueName: \"kubernetes.io/projected/ccdea203-220a-457e-b00f-61b48afc7329-kube-api-access-gnvh5\") pod \"nova-metadata-0\" (UID: \"ccdea203-220a-457e-b00f-61b48afc7329\") " pod="openstack/nova-metadata-0" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.982943 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccdea203-220a-457e-b00f-61b48afc7329-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ccdea203-220a-457e-b00f-61b48afc7329\") " pod="openstack/nova-metadata-0" Jan 30 08:31:45 crc kubenswrapper[4870]: I0130 08:31:45.982996 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-logs\") pod \"nova-api-0\" (UID: \"23cc5a83-d937-4f75-8256-e2ea77e8fe0a\") " pod="openstack/nova-api-0" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.085351 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-logs\") pod \"nova-api-0\" (UID: \"23cc5a83-d937-4f75-8256-e2ea77e8fe0a\") " pod="openstack/nova-api-0" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.085390 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccdea203-220a-457e-b00f-61b48afc7329-logs\") pod \"nova-metadata-0\" (UID: \"ccdea203-220a-457e-b00f-61b48afc7329\") " pod="openstack/nova-metadata-0" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.085436 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccdea203-220a-457e-b00f-61b48afc7329-config-data\") pod \"nova-metadata-0\" (UID: \"ccdea203-220a-457e-b00f-61b48afc7329\") " pod="openstack/nova-metadata-0" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.085461 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccdea203-220a-457e-b00f-61b48afc7329-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ccdea203-220a-457e-b00f-61b48afc7329\") " pod="openstack/nova-metadata-0" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.085512 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmpzp\" (UniqueName: \"kubernetes.io/projected/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-kube-api-access-tmpzp\") pod \"nova-api-0\" (UID: \"23cc5a83-d937-4f75-8256-e2ea77e8fe0a\") " 
pod="openstack/nova-api-0" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.085555 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-config-data\") pod \"nova-api-0\" (UID: \"23cc5a83-d937-4f75-8256-e2ea77e8fe0a\") " pod="openstack/nova-api-0" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.085618 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"23cc5a83-d937-4f75-8256-e2ea77e8fe0a\") " pod="openstack/nova-api-0" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.085643 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnvh5\" (UniqueName: \"kubernetes.io/projected/ccdea203-220a-457e-b00f-61b48afc7329-kube-api-access-gnvh5\") pod \"nova-metadata-0\" (UID: \"ccdea203-220a-457e-b00f-61b48afc7329\") " pod="openstack/nova-metadata-0" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.085663 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccdea203-220a-457e-b00f-61b48afc7329-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ccdea203-220a-457e-b00f-61b48afc7329\") " pod="openstack/nova-metadata-0" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.087068 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-logs\") pod \"nova-api-0\" (UID: \"23cc5a83-d937-4f75-8256-e2ea77e8fe0a\") " pod="openstack/nova-api-0" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.087070 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1de7242-d69a-4e86-8461-a771c855adf9" path="/var/lib/kubelet/pods/a1de7242-d69a-4e86-8461-a771c855adf9/volumes" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.087300 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccdea203-220a-457e-b00f-61b48afc7329-logs\") pod \"nova-metadata-0\" (UID: \"ccdea203-220a-457e-b00f-61b48afc7329\") " pod="openstack/nova-metadata-0" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.087771 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcdab968-579c-4189-87c5-05bad5469d6c" path="/var/lib/kubelet/pods/dcdab968-579c-4189-87c5-05bad5469d6c/volumes" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.089652 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-config-data\") pod \"nova-api-0\" (UID: \"23cc5a83-d937-4f75-8256-e2ea77e8fe0a\") " pod="openstack/nova-api-0" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.090048 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccdea203-220a-457e-b00f-61b48afc7329-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ccdea203-220a-457e-b00f-61b48afc7329\") " pod="openstack/nova-metadata-0" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.090627 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ccdea203-220a-457e-b00f-61b48afc7329-config-data\") pod \"nova-metadata-0\" (UID: \"ccdea203-220a-457e-b00f-61b48afc7329\") " pod="openstack/nova-metadata-0" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.091934 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"23cc5a83-d937-4f75-8256-e2ea77e8fe0a\") " pod="openstack/nova-api-0" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.092434 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccdea203-220a-457e-b00f-61b48afc7329-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ccdea203-220a-457e-b00f-61b48afc7329\") " pod="openstack/nova-metadata-0" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.105429 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnvh5\" (UniqueName: \"kubernetes.io/projected/ccdea203-220a-457e-b00f-61b48afc7329-kube-api-access-gnvh5\") pod \"nova-metadata-0\" (UID: \"ccdea203-220a-457e-b00f-61b48afc7329\") " pod="openstack/nova-metadata-0" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.105779 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmpzp\" (UniqueName: \"kubernetes.io/projected/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-kube-api-access-tmpzp\") pod \"nova-api-0\" (UID: \"23cc5a83-d937-4f75-8256-e2ea77e8fe0a\") " pod="openstack/nova-api-0" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.167931 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.179919 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.695149 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.756469 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:31:46 crc kubenswrapper[4870]: W0130 08:31:46.759426 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23cc5a83_d937_4f75_8256_e2ea77e8fe0a.slice/crio-6699255e6904997ff78ba3f0df7576bf2779f5c9622dac09638311ffe5a134e0 WatchSource:0}: Error finding container 6699255e6904997ff78ba3f0df7576bf2779f5c9622dac09638311ffe5a134e0: Status 404 returned error can't find the container with id 6699255e6904997ff78ba3f0df7576bf2779f5c9622dac09638311ffe5a134e0 Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.786355 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23cc5a83-d937-4f75-8256-e2ea77e8fe0a","Type":"ContainerStarted","Data":"6699255e6904997ff78ba3f0df7576bf2779f5c9622dac09638311ffe5a134e0"} Jan 30 08:31:46 crc kubenswrapper[4870]: I0130 08:31:46.789019 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ccdea203-220a-457e-b00f-61b48afc7329","Type":"ContainerStarted","Data":"06420277392a8c95a7a0decdb3bfa5fa2b5bd8d8fb05e11deef7fc819e13647c"} Jan 30 08:31:47 crc kubenswrapper[4870]: I0130 08:31:47.801104 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23cc5a83-d937-4f75-8256-e2ea77e8fe0a","Type":"ContainerStarted","Data":"cb9bff710edf11f4d70de2ac074768c3d03a94425b5a99d8d702b4bc10eb9597"} Jan 30 08:31:47 crc kubenswrapper[4870]: I0130 08:31:47.801456 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23cc5a83-d937-4f75-8256-e2ea77e8fe0a","Type":"ContainerStarted","Data":"41ef7bc61259edffa4f8c80705a3d4d0d4a5ae2248d6e06295979bab4e16fc83"} Jan 30 08:31:47 crc kubenswrapper[4870]: I0130 08:31:47.804658 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ccdea203-220a-457e-b00f-61b48afc7329","Type":"ContainerStarted","Data":"1cc055e105ed441f83fd209b5650e3acb7acad6bae95d0a2dc677548d12ed7ab"} Jan 30 08:31:47 crc kubenswrapper[4870]: I0130 08:31:47.804740 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ccdea203-220a-457e-b00f-61b48afc7329","Type":"ContainerStarted","Data":"f2ed744c83bd703945e51ea336f9496ba896a26dc498f4d3970e4b131f486f11"} Jan 30 08:31:47 crc kubenswrapper[4870]: I0130 08:31:47.831936 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.831908677 podStartE2EDuration="2.831908677s" podCreationTimestamp="2026-01-30 08:31:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:47.830794813 +0000 UTC m=+1346.526341932" watchObservedRunningTime="2026-01-30 08:31:47.831908677 +0000 UTC m=+1346.527455796" Jan 30 08:31:47 crc kubenswrapper[4870]: I0130 08:31:47.863069 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.863045369 podStartE2EDuration="2.863045369s" podCreationTimestamp="2026-01-30 08:31:45 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:47.857960629 +0000 UTC m=+1346.553507768" watchObservedRunningTime="2026-01-30 08:31:47.863045369 +0000 UTC m=+1346.558592488" Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.281182 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.356101 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7769eb04-0ff3-41ef-9977-e66563ea4085-config-data\") pod \"7769eb04-0ff3-41ef-9977-e66563ea4085\" (UID: \"7769eb04-0ff3-41ef-9977-e66563ea4085\") " Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.356157 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nzwt\" (UniqueName: \"kubernetes.io/projected/7769eb04-0ff3-41ef-9977-e66563ea4085-kube-api-access-4nzwt\") pod \"7769eb04-0ff3-41ef-9977-e66563ea4085\" (UID: \"7769eb04-0ff3-41ef-9977-e66563ea4085\") " Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.356289 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7769eb04-0ff3-41ef-9977-e66563ea4085-combined-ca-bundle\") pod \"7769eb04-0ff3-41ef-9977-e66563ea4085\" (UID: \"7769eb04-0ff3-41ef-9977-e66563ea4085\") " Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.360771 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7769eb04-0ff3-41ef-9977-e66563ea4085-kube-api-access-4nzwt" (OuterVolumeSpecName: "kube-api-access-4nzwt") pod "7769eb04-0ff3-41ef-9977-e66563ea4085" (UID: "7769eb04-0ff3-41ef-9977-e66563ea4085"). InnerVolumeSpecName "kube-api-access-4nzwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.382470 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7769eb04-0ff3-41ef-9977-e66563ea4085-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7769eb04-0ff3-41ef-9977-e66563ea4085" (UID: "7769eb04-0ff3-41ef-9977-e66563ea4085"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.395532 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7769eb04-0ff3-41ef-9977-e66563ea4085-config-data" (OuterVolumeSpecName: "config-data") pod "7769eb04-0ff3-41ef-9977-e66563ea4085" (UID: "7769eb04-0ff3-41ef-9977-e66563ea4085"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.460318 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7769eb04-0ff3-41ef-9977-e66563ea4085-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.460363 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nzwt\" (UniqueName: \"kubernetes.io/projected/7769eb04-0ff3-41ef-9977-e66563ea4085-kube-api-access-4nzwt\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.460378 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7769eb04-0ff3-41ef-9977-e66563ea4085-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.895033 4870 generic.go:334] "Generic (PLEG): container finished" podID="7769eb04-0ff3-41ef-9977-e66563ea4085" containerID="aae79fe40acd6f2726aec10f649836526fb8e19a3f4e4e131c09bcefac9d4c66" exitCode=0 Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.895084 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7769eb04-0ff3-41ef-9977-e66563ea4085","Type":"ContainerDied","Data":"aae79fe40acd6f2726aec10f649836526fb8e19a3f4e4e131c09bcefac9d4c66"} Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.895110 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7769eb04-0ff3-41ef-9977-e66563ea4085","Type":"ContainerDied","Data":"e5d61cbfee6394ab7139462d11ff3290876e24938531670c58f9dd81e3a55b6c"} Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.895127 4870 scope.go:117] "RemoveContainer" containerID="aae79fe40acd6f2726aec10f649836526fb8e19a3f4e4e131c09bcefac9d4c66" Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.895140 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.929330 4870 scope.go:117] "RemoveContainer" containerID="aae79fe40acd6f2726aec10f649836526fb8e19a3f4e4e131c09bcefac9d4c66" Jan 30 08:31:49 crc kubenswrapper[4870]: E0130 08:31:49.929764 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aae79fe40acd6f2726aec10f649836526fb8e19a3f4e4e131c09bcefac9d4c66\": container with ID starting with aae79fe40acd6f2726aec10f649836526fb8e19a3f4e4e131c09bcefac9d4c66 not found: ID does not exist" containerID="aae79fe40acd6f2726aec10f649836526fb8e19a3f4e4e131c09bcefac9d4c66" Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.929792 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aae79fe40acd6f2726aec10f649836526fb8e19a3f4e4e131c09bcefac9d4c66"} err="failed to get container status \"aae79fe40acd6f2726aec10f649836526fb8e19a3f4e4e131c09bcefac9d4c66\": rpc error: code = NotFound desc = could not find container \"aae79fe40acd6f2726aec10f649836526fb8e19a3f4e4e131c09bcefac9d4c66\": container with ID starting with aae79fe40acd6f2726aec10f649836526fb8e19a3f4e4e131c09bcefac9d4c66 not found: ID does not exist" Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.945603 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.961790 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.975700 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:31:49 crc kubenswrapper[4870]: E0130 08:31:49.976114 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7769eb04-0ff3-41ef-9977-e66563ea4085" containerName="nova-scheduler-scheduler" Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.976125 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="7769eb04-0ff3-41ef-9977-e66563ea4085" containerName="nova-scheduler-scheduler" Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.976298 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="7769eb04-0ff3-41ef-9977-e66563ea4085" containerName="nova-scheduler-scheduler" Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.976832 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.976913 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:31:49 crc kubenswrapper[4870]: I0130 08:31:49.988656 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 08:31:50 crc kubenswrapper[4870]: I0130 08:31:50.086279 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7769eb04-0ff3-41ef-9977-e66563ea4085" path="/var/lib/kubelet/pods/7769eb04-0ff3-41ef-9977-e66563ea4085/volumes" Jan 30 08:31:50 crc kubenswrapper[4870]: I0130 08:31:50.087655 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf0d9a5-3778-453d-ad3a-0d28eb2e71a6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ecf0d9a5-3778-453d-ad3a-0d28eb2e71a6\") " pod="openstack/nova-scheduler-0" Jan 30 08:31:50 crc kubenswrapper[4870]: I0130 08:31:50.087731 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68jfc\" (UniqueName: \"kubernetes.io/projected/ecf0d9a5-3778-453d-ad3a-0d28eb2e71a6-kube-api-access-68jfc\") pod \"nova-scheduler-0\" (UID: \"ecf0d9a5-3778-453d-ad3a-0d28eb2e71a6\") " pod="openstack/nova-scheduler-0" Jan 30 08:31:50 crc kubenswrapper[4870]: I0130 08:31:50.087892 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf0d9a5-3778-453d-ad3a-0d28eb2e71a6-config-data\") pod \"nova-scheduler-0\" (UID: \"ecf0d9a5-3778-453d-ad3a-0d28eb2e71a6\") " pod="openstack/nova-scheduler-0" Jan 30 08:31:50 crc kubenswrapper[4870]: I0130 08:31:50.190029 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf0d9a5-3778-453d-ad3a-0d28eb2e71a6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ecf0d9a5-3778-453d-ad3a-0d28eb2e71a6\") " pod="openstack/nova-scheduler-0" Jan 30 08:31:50 crc kubenswrapper[4870]: I0130 08:31:50.190247 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68jfc\" (UniqueName: \"kubernetes.io/projected/ecf0d9a5-3778-453d-ad3a-0d28eb2e71a6-kube-api-access-68jfc\") pod \"nova-scheduler-0\" (UID: \"ecf0d9a5-3778-453d-ad3a-0d28eb2e71a6\") " pod="openstack/nova-scheduler-0" Jan 30 08:31:50 crc kubenswrapper[4870]: I0130 08:31:50.190459 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf0d9a5-3778-453d-ad3a-0d28eb2e71a6-config-data\") pod \"nova-scheduler-0\" (UID: \"ecf0d9a5-3778-453d-ad3a-0d28eb2e71a6\") " pod="openstack/nova-scheduler-0" Jan 30 08:31:50 crc kubenswrapper[4870]: I0130 08:31:50.196782 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf0d9a5-3778-453d-ad3a-0d28eb2e71a6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ecf0d9a5-3778-453d-ad3a-0d28eb2e71a6\") " pod="openstack/nova-scheduler-0" Jan 30 08:31:50 crc kubenswrapper[4870]: I0130 08:31:50.199303 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf0d9a5-3778-453d-ad3a-0d28eb2e71a6-config-data\") pod \"nova-scheduler-0\" (UID: \"ecf0d9a5-3778-453d-ad3a-0d28eb2e71a6\") " pod="openstack/nova-scheduler-0" Jan 30 08:31:50 crc kubenswrapper[4870]: I0130 08:31:50.211992 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-68jfc\" (UniqueName: \"kubernetes.io/projected/ecf0d9a5-3778-453d-ad3a-0d28eb2e71a6-kube-api-access-68jfc\") pod \"nova-scheduler-0\" (UID: \"ecf0d9a5-3778-453d-ad3a-0d28eb2e71a6\") " pod="openstack/nova-scheduler-0" Jan 30 08:31:50 crc kubenswrapper[4870]: I0130 08:31:50.316317 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:31:50 crc kubenswrapper[4870]: I0130 08:31:50.865011 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:31:50 crc kubenswrapper[4870]: I0130 08:31:50.908791 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ecf0d9a5-3778-453d-ad3a-0d28eb2e71a6","Type":"ContainerStarted","Data":"658a5390562c6f35ef0f73da7eb87bdc477c4b9177074bae8403787f752b2ffd"} Jan 30 08:31:51 crc kubenswrapper[4870]: I0130 08:31:51.180315 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 08:31:51 crc kubenswrapper[4870]: I0130 08:31:51.180634 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 08:31:51 crc kubenswrapper[4870]: I0130 08:31:51.922787 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ecf0d9a5-3778-453d-ad3a-0d28eb2e71a6","Type":"ContainerStarted","Data":"a0d9e18e9847c78a60b7232ac124e6b11ea0da185c6df5064334480e32604d14"} Jan 30 08:31:51 crc kubenswrapper[4870]: I0130 08:31:51.958697 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.9586743909999997 podStartE2EDuration="2.958674391s" podCreationTimestamp="2026-01-30 08:31:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:51.942750239 +0000 UTC m=+1350.638297368" watchObservedRunningTime="2026-01-30 08:31:51.958674391 +0000 UTC m=+1350.654221510" Jan 30 08:31:55 crc kubenswrapper[4870]: I0130 08:31:55.250379 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:31:55 crc kubenswrapper[4870]: I0130 08:31:55.250796 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:31:55 crc kubenswrapper[4870]: I0130 08:31:55.317565 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 08:31:56 crc kubenswrapper[4870]: I0130 08:31:56.168655 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 08:31:56 crc kubenswrapper[4870]: I0130 08:31:56.168736 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 08:31:56 crc kubenswrapper[4870]: I0130 08:31:56.180773 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 08:31:56 crc kubenswrapper[4870]: 
I0130 08:31:56.180850 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 08:31:57 crc kubenswrapper[4870]: I0130 08:31:57.251215 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="23cc5a83-d937-4f75-8256-e2ea77e8fe0a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.224:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:31:57 crc kubenswrapper[4870]: I0130 08:31:57.267179 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="23cc5a83-d937-4f75-8256-e2ea77e8fe0a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.224:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:31:57 crc kubenswrapper[4870]: I0130 08:31:57.267235 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ccdea203-220a-457e-b00f-61b48afc7329" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 08:31:57 crc kubenswrapper[4870]: I0130 08:31:57.267626 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ccdea203-220a-457e-b00f-61b48afc7329" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:32:00 crc kubenswrapper[4870]: I0130 08:32:00.316745 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 08:32:00 crc kubenswrapper[4870]: I0130 08:32:00.373211 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 08:32:01 crc kubenswrapper[4870]: I0130 08:32:01.085002 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.179982 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.180822 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.181859 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.181929 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.190970 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.193358 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.194071 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.196358 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.203385 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-metadata-0" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.461625 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6999845677-vd26g"] Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.465185 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.476360 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6999845677-vd26g"] Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.567055 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-ovsdbserver-nb\") pod \"dnsmasq-dns-6999845677-vd26g\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.567204 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-dns-svc\") pod \"dnsmasq-dns-6999845677-vd26g\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.567331 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-dns-swift-storage-0\") pod \"dnsmasq-dns-6999845677-vd26g\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.567581 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2gv2\" (UniqueName: \"kubernetes.io/projected/8a0f9be1-926a-4340-9f05-ba673e3e471e-kube-api-access-c2gv2\") pod \"dnsmasq-dns-6999845677-vd26g\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.567633 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-config\") pod \"dnsmasq-dns-6999845677-vd26g\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.567776 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-ovsdbserver-sb\") pod \"dnsmasq-dns-6999845677-vd26g\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.670094 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-ovsdbserver-nb\") pod \"dnsmasq-dns-6999845677-vd26g\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.670153 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-dns-svc\") pod \"dnsmasq-dns-6999845677-vd26g\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.670188 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-dns-swift-storage-0\") pod \"dnsmasq-dns-6999845677-vd26g\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.670242 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2gv2\" (UniqueName: \"kubernetes.io/projected/8a0f9be1-926a-4340-9f05-ba673e3e471e-kube-api-access-c2gv2\") pod \"dnsmasq-dns-6999845677-vd26g\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.670260 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-config\") pod \"dnsmasq-dns-6999845677-vd26g\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.670298 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-ovsdbserver-sb\") pod \"dnsmasq-dns-6999845677-vd26g\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.671114 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-ovsdbserver-sb\") pod \"dnsmasq-dns-6999845677-vd26g\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.671216 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-ovsdbserver-nb\") pod \"dnsmasq-dns-6999845677-vd26g\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.671456 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-dns-svc\") pod \"dnsmasq-dns-6999845677-vd26g\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.671492 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-dns-swift-storage-0\") pod \"dnsmasq-dns-6999845677-vd26g\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.671542 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-config\") pod 
\"dnsmasq-dns-6999845677-vd26g\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.695394 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2gv2\" (UniqueName: \"kubernetes.io/projected/8a0f9be1-926a-4340-9f05-ba673e3e471e-kube-api-access-c2gv2\") pod \"dnsmasq-dns-6999845677-vd26g\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:32:06 crc kubenswrapper[4870]: I0130 08:32:06.790992 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:32:07 crc kubenswrapper[4870]: I0130 08:32:07.117679 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 08:32:07 crc kubenswrapper[4870]: I0130 08:32:07.283019 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6999845677-vd26g"] Jan 30 08:32:08 crc kubenswrapper[4870]: I0130 08:32:08.123970 4870 generic.go:334] "Generic (PLEG): container finished" podID="8a0f9be1-926a-4340-9f05-ba673e3e471e" containerID="cf4692acee92608a7992da7d8327f9e59bf6302ddd00cf4b1c51b56d002d56e2" exitCode=0 Jan 30 08:32:08 crc kubenswrapper[4870]: I0130 08:32:08.124058 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6999845677-vd26g" event={"ID":"8a0f9be1-926a-4340-9f05-ba673e3e471e","Type":"ContainerDied","Data":"cf4692acee92608a7992da7d8327f9e59bf6302ddd00cf4b1c51b56d002d56e2"} Jan 30 08:32:08 crc kubenswrapper[4870]: I0130 08:32:08.124515 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6999845677-vd26g" event={"ID":"8a0f9be1-926a-4340-9f05-ba673e3e471e","Type":"ContainerStarted","Data":"b68353bb62e0c010f98866001c94c9fbd25b787dc4392f6cd92563052dd236dc"} Jan 30 08:32:08 crc kubenswrapper[4870]: I0130 08:32:08.786512 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:32:08 crc kubenswrapper[4870]: I0130 08:32:08.799304 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:32:08 crc kubenswrapper[4870]: I0130 08:32:08.799581 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" containerName="ceilometer-central-agent" containerID="cri-o://718104975a12146cc00fc56171ab65a7bdbb1698df506296389cd8521d23c531" gracePeriod=30 Jan 30 08:32:08 crc kubenswrapper[4870]: I0130 08:32:08.799648 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" containerName="proxy-httpd" containerID="cri-o://74609d48db351a4551e4a14571d8ceeb785d76a089cbc5d4cef7533f25e42b9a" gracePeriod=30 Jan 30 08:32:08 crc kubenswrapper[4870]: I0130 08:32:08.799710 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" containerName="sg-core" containerID="cri-o://75846033cc04fdf75214f4cc82a37a97af903516f1d459b619c65d4c2c05c4e5" gracePeriod=30 Jan 30 08:32:08 crc kubenswrapper[4870]: I0130 08:32:08.799982 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" containerName="ceilometer-notification-agent" 
containerID="cri-o://919ec938ff2a2a8f2ab814ba8d607fde425c77054e1f0ed868c053bea67814e2" gracePeriod=30 Jan 30 08:32:09 crc kubenswrapper[4870]: I0130 08:32:09.139818 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6999845677-vd26g" event={"ID":"8a0f9be1-926a-4340-9f05-ba673e3e471e","Type":"ContainerStarted","Data":"478f243e0d74f6dbf93b850491c64bcf3ea2a501bedee4e704be06e6e754b799"} Jan 30 08:32:09 crc kubenswrapper[4870]: I0130 08:32:09.140851 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:32:09 crc kubenswrapper[4870]: I0130 08:32:09.147753 4870 generic.go:334] "Generic (PLEG): container finished" podID="fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" containerID="74609d48db351a4551e4a14571d8ceeb785d76a089cbc5d4cef7533f25e42b9a" exitCode=0 Jan 30 08:32:09 crc kubenswrapper[4870]: I0130 08:32:09.147792 4870 generic.go:334] "Generic (PLEG): container finished" podID="fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" containerID="75846033cc04fdf75214f4cc82a37a97af903516f1d459b619c65d4c2c05c4e5" exitCode=2 Jan 30 08:32:09 crc kubenswrapper[4870]: I0130 08:32:09.147829 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27","Type":"ContainerDied","Data":"74609d48db351a4551e4a14571d8ceeb785d76a089cbc5d4cef7533f25e42b9a"} Jan 30 08:32:09 crc kubenswrapper[4870]: I0130 08:32:09.147872 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27","Type":"ContainerDied","Data":"75846033cc04fdf75214f4cc82a37a97af903516f1d459b619c65d4c2c05c4e5"} Jan 30 08:32:09 crc kubenswrapper[4870]: I0130 08:32:09.148088 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="23cc5a83-d937-4f75-8256-e2ea77e8fe0a" containerName="nova-api-log" containerID="cri-o://41ef7bc61259edffa4f8c80705a3d4d0d4a5ae2248d6e06295979bab4e16fc83" gracePeriod=30 Jan 30 08:32:09 crc kubenswrapper[4870]: I0130 08:32:09.148107 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="23cc5a83-d937-4f75-8256-e2ea77e8fe0a" containerName="nova-api-api" containerID="cri-o://cb9bff710edf11f4d70de2ac074768c3d03a94425b5a99d8d702b4bc10eb9597" gracePeriod=30 Jan 30 08:32:09 crc kubenswrapper[4870]: I0130 08:32:09.165331 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6999845677-vd26g" podStartSLOduration=3.165313673 podStartE2EDuration="3.165313673s" podCreationTimestamp="2026-01-30 08:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:09.160424019 +0000 UTC m=+1367.855971138" watchObservedRunningTime="2026-01-30 08:32:09.165313673 +0000 UTC m=+1367.860860782" Jan 30 08:32:10 crc kubenswrapper[4870]: I0130 08:32:10.158542 4870 generic.go:334] "Generic (PLEG): container finished" podID="fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" containerID="718104975a12146cc00fc56171ab65a7bdbb1698df506296389cd8521d23c531" exitCode=0 Jan 30 08:32:10 crc kubenswrapper[4870]: I0130 08:32:10.158626 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27","Type":"ContainerDied","Data":"718104975a12146cc00fc56171ab65a7bdbb1698df506296389cd8521d23c531"} Jan 30 08:32:10 crc kubenswrapper[4870]: I0130 
08:32:10.160124 4870 generic.go:334] "Generic (PLEG): container finished" podID="23cc5a83-d937-4f75-8256-e2ea77e8fe0a" containerID="41ef7bc61259edffa4f8c80705a3d4d0d4a5ae2248d6e06295979bab4e16fc83" exitCode=143 Jan 30 08:32:10 crc kubenswrapper[4870]: I0130 08:32:10.160205 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23cc5a83-d937-4f75-8256-e2ea77e8fe0a","Type":"ContainerDied","Data":"41ef7bc61259edffa4f8c80705a3d4d0d4a5ae2248d6e06295979bab4e16fc83"} Jan 30 08:32:11 crc kubenswrapper[4870]: I0130 08:32:11.178442 4870 generic.go:334] "Generic (PLEG): container finished" podID="23cc5a83-d937-4f75-8256-e2ea77e8fe0a" containerID="cb9bff710edf11f4d70de2ac074768c3d03a94425b5a99d8d702b4bc10eb9597" exitCode=0 Jan 30 08:32:11 crc kubenswrapper[4870]: I0130 08:32:11.179584 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23cc5a83-d937-4f75-8256-e2ea77e8fe0a","Type":"ContainerDied","Data":"cb9bff710edf11f4d70de2ac074768c3d03a94425b5a99d8d702b4bc10eb9597"} Jan 30 08:32:11 crc kubenswrapper[4870]: I0130 08:32:11.360401 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:32:11 crc kubenswrapper[4870]: I0130 08:32:11.481262 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-logs" (OuterVolumeSpecName: "logs") pod "23cc5a83-d937-4f75-8256-e2ea77e8fe0a" (UID: "23cc5a83-d937-4f75-8256-e2ea77e8fe0a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:32:11 crc kubenswrapper[4870]: I0130 08:32:11.481371 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-logs\") pod \"23cc5a83-d937-4f75-8256-e2ea77e8fe0a\" (UID: \"23cc5a83-d937-4f75-8256-e2ea77e8fe0a\") " Jan 30 08:32:11 crc kubenswrapper[4870]: I0130 08:32:11.481550 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-config-data\") pod \"23cc5a83-d937-4f75-8256-e2ea77e8fe0a\" (UID: \"23cc5a83-d937-4f75-8256-e2ea77e8fe0a\") " Jan 30 08:32:11 crc kubenswrapper[4870]: I0130 08:32:11.481592 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-combined-ca-bundle\") pod \"23cc5a83-d937-4f75-8256-e2ea77e8fe0a\" (UID: \"23cc5a83-d937-4f75-8256-e2ea77e8fe0a\") " Jan 30 08:32:11 crc kubenswrapper[4870]: I0130 08:32:11.481632 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmpzp\" (UniqueName: \"kubernetes.io/projected/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-kube-api-access-tmpzp\") pod \"23cc5a83-d937-4f75-8256-e2ea77e8fe0a\" (UID: \"23cc5a83-d937-4f75-8256-e2ea77e8fe0a\") " Jan 30 08:32:11 crc kubenswrapper[4870]: I0130 08:32:11.482151 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:11 crc kubenswrapper[4870]: I0130 08:32:11.487487 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-kube-api-access-tmpzp" (OuterVolumeSpecName: 
"kube-api-access-tmpzp") pod "23cc5a83-d937-4f75-8256-e2ea77e8fe0a" (UID: "23cc5a83-d937-4f75-8256-e2ea77e8fe0a"). InnerVolumeSpecName "kube-api-access-tmpzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:32:11 crc kubenswrapper[4870]: I0130 08:32:11.513801 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-config-data" (OuterVolumeSpecName: "config-data") pod "23cc5a83-d937-4f75-8256-e2ea77e8fe0a" (UID: "23cc5a83-d937-4f75-8256-e2ea77e8fe0a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:32:11 crc kubenswrapper[4870]: I0130 08:32:11.519306 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23cc5a83-d937-4f75-8256-e2ea77e8fe0a" (UID: "23cc5a83-d937-4f75-8256-e2ea77e8fe0a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:32:11 crc kubenswrapper[4870]: I0130 08:32:11.584302 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:11 crc kubenswrapper[4870]: I0130 08:32:11.584341 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:11 crc kubenswrapper[4870]: I0130 08:32:11.584356 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmpzp\" (UniqueName: \"kubernetes.io/projected/23cc5a83-d937-4f75-8256-e2ea77e8fe0a-kube-api-access-tmpzp\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.189915 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23cc5a83-d937-4f75-8256-e2ea77e8fe0a","Type":"ContainerDied","Data":"6699255e6904997ff78ba3f0df7576bf2779f5c9622dac09638311ffe5a134e0"} Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.189971 4870 scope.go:117] "RemoveContainer" containerID="cb9bff710edf11f4d70de2ac074768c3d03a94425b5a99d8d702b4bc10eb9597" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.190119 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.214823 4870 scope.go:117] "RemoveContainer" containerID="41ef7bc61259edffa4f8c80705a3d4d0d4a5ae2248d6e06295979bab4e16fc83" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.225792 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.248081 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.261666 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 08:32:12 crc kubenswrapper[4870]: E0130 08:32:12.262272 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23cc5a83-d937-4f75-8256-e2ea77e8fe0a" containerName="nova-api-log" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.262301 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="23cc5a83-d937-4f75-8256-e2ea77e8fe0a" containerName="nova-api-log" Jan 30 08:32:12 crc kubenswrapper[4870]: E0130 08:32:12.262335 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23cc5a83-d937-4f75-8256-e2ea77e8fe0a" containerName="nova-api-api" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.262344 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="23cc5a83-d937-4f75-8256-e2ea77e8fe0a" containerName="nova-api-api" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.262600 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="23cc5a83-d937-4f75-8256-e2ea77e8fe0a" containerName="nova-api-log" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.262630 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="23cc5a83-d937-4f75-8256-e2ea77e8fe0a" containerName="nova-api-api" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.263855 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.265477 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.267331 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.267596 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.281349 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.401783 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25g4s\" (UniqueName: \"kubernetes.io/projected/ed40aa22-a330-46ab-9971-39e764e63ff7-kube-api-access-25g4s\") pod \"nova-api-0\" (UID: \"ed40aa22-a330-46ab-9971-39e764e63ff7\") " pod="openstack/nova-api-0" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.402112 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed40aa22-a330-46ab-9971-39e764e63ff7-logs\") pod \"nova-api-0\" (UID: \"ed40aa22-a330-46ab-9971-39e764e63ff7\") " pod="openstack/nova-api-0" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.402188 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed40aa22-a330-46ab-9971-39e764e63ff7-config-data\") pod \"nova-api-0\" (UID: \"ed40aa22-a330-46ab-9971-39e764e63ff7\") " pod="openstack/nova-api-0" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.402284 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed40aa22-a330-46ab-9971-39e764e63ff7-public-tls-certs\") pod \"nova-api-0\" (UID: \"ed40aa22-a330-46ab-9971-39e764e63ff7\") " pod="openstack/nova-api-0" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.402327 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed40aa22-a330-46ab-9971-39e764e63ff7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ed40aa22-a330-46ab-9971-39e764e63ff7\") " pod="openstack/nova-api-0" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.402555 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed40aa22-a330-46ab-9971-39e764e63ff7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ed40aa22-a330-46ab-9971-39e764e63ff7\") " pod="openstack/nova-api-0" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.504727 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed40aa22-a330-46ab-9971-39e764e63ff7-config-data\") pod \"nova-api-0\" (UID: \"ed40aa22-a330-46ab-9971-39e764e63ff7\") " pod="openstack/nova-api-0" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.504800 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed40aa22-a330-46ab-9971-39e764e63ff7-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"ed40aa22-a330-46ab-9971-39e764e63ff7\") " pod="openstack/nova-api-0" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.504823 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed40aa22-a330-46ab-9971-39e764e63ff7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ed40aa22-a330-46ab-9971-39e764e63ff7\") " pod="openstack/nova-api-0" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.505208 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed40aa22-a330-46ab-9971-39e764e63ff7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ed40aa22-a330-46ab-9971-39e764e63ff7\") " pod="openstack/nova-api-0" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.505276 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g4s\" (UniqueName: \"kubernetes.io/projected/ed40aa22-a330-46ab-9971-39e764e63ff7-kube-api-access-25g4s\") pod \"nova-api-0\" (UID: \"ed40aa22-a330-46ab-9971-39e764e63ff7\") " pod="openstack/nova-api-0" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.505358 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed40aa22-a330-46ab-9971-39e764e63ff7-logs\") pod \"nova-api-0\" (UID: \"ed40aa22-a330-46ab-9971-39e764e63ff7\") " pod="openstack/nova-api-0" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.507294 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed40aa22-a330-46ab-9971-39e764e63ff7-logs\") pod \"nova-api-0\" (UID: \"ed40aa22-a330-46ab-9971-39e764e63ff7\") " pod="openstack/nova-api-0" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.512311 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed40aa22-a330-46ab-9971-39e764e63ff7-config-data\") pod \"nova-api-0\" (UID: \"ed40aa22-a330-46ab-9971-39e764e63ff7\") " pod="openstack/nova-api-0" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.512695 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed40aa22-a330-46ab-9971-39e764e63ff7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ed40aa22-a330-46ab-9971-39e764e63ff7\") " pod="openstack/nova-api-0" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.522102 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed40aa22-a330-46ab-9971-39e764e63ff7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ed40aa22-a330-46ab-9971-39e764e63ff7\") " pod="openstack/nova-api-0" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.525397 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed40aa22-a330-46ab-9971-39e764e63ff7-public-tls-certs\") pod \"nova-api-0\" (UID: \"ed40aa22-a330-46ab-9971-39e764e63ff7\") " pod="openstack/nova-api-0" Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.531725 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25g4s\" (UniqueName: \"kubernetes.io/projected/ed40aa22-a330-46ab-9971-39e764e63ff7-kube-api-access-25g4s\") pod \"nova-api-0\" (UID: \"ed40aa22-a330-46ab-9971-39e764e63ff7\") " pod="openstack/nova-api-0" 
Jan 30 08:32:12 crc kubenswrapper[4870]: I0130 08:32:12.583675 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:32:13 crc kubenswrapper[4870]: I0130 08:32:13.112356 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:32:13 crc kubenswrapper[4870]: I0130 08:32:13.248157 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ed40aa22-a330-46ab-9971-39e764e63ff7","Type":"ContainerStarted","Data":"2a86347911ba5fb18ae9df61916a53aec7e980b31b61f671dff024f66f3d7263"} Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.087694 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23cc5a83-d937-4f75-8256-e2ea77e8fe0a" path="/var/lib/kubelet/pods/23cc5a83-d937-4f75-8256-e2ea77e8fe0a/volumes" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.194137 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.243627 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnc7j\" (UniqueName: \"kubernetes.io/projected/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-kube-api-access-hnc7j\") pod \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.243677 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-sg-core-conf-yaml\") pod \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.243761 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-run-httpd\") pod \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.243833 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-log-httpd\") pod \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.243900 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-scripts\") pod \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.243959 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-ceilometer-tls-certs\") pod \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.244006 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-combined-ca-bundle\") pod \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 
08:32:14.244206 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-config-data\") pod \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\" (UID: \"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27\") " Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.244323 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" (UID: "fe5bff0c-1f97-4c2d-9f95-46c7c3799d27"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.244659 4870 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.244740 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" (UID: "fe5bff0c-1f97-4c2d-9f95-46c7c3799d27"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.250640 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-kube-api-access-hnc7j" (OuterVolumeSpecName: "kube-api-access-hnc7j") pod "fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" (UID: "fe5bff0c-1f97-4c2d-9f95-46c7c3799d27"). InnerVolumeSpecName "kube-api-access-hnc7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.258424 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-scripts" (OuterVolumeSpecName: "scripts") pod "fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" (UID: "fe5bff0c-1f97-4c2d-9f95-46c7c3799d27"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.273288 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ed40aa22-a330-46ab-9971-39e764e63ff7","Type":"ContainerStarted","Data":"9d25fed9d6ec793acd29075f40a1362f97f3f12efc8a4fd33062711db8a9bb39"} Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.276143 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ed40aa22-a330-46ab-9971-39e764e63ff7","Type":"ContainerStarted","Data":"a2fe1d66955d487443e012d436d6f7b8bfa1169a936e81749a67c465204fa22a"} Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.288632 4870 generic.go:334] "Generic (PLEG): container finished" podID="fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" containerID="919ec938ff2a2a8f2ab814ba8d607fde425c77054e1f0ed868c053bea67814e2" exitCode=0 Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.288937 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27","Type":"ContainerDied","Data":"919ec938ff2a2a8f2ab814ba8d607fde425c77054e1f0ed868c053bea67814e2"} Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.289045 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5bff0c-1f97-4c2d-9f95-46c7c3799d27","Type":"ContainerDied","Data":"878d9967ec40148177071222b9ab0dc547347647130442c6f0bf2d56dd31e4d4"} Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.289133 4870 scope.go:117] "RemoveContainer" containerID="74609d48db351a4551e4a14571d8ceeb785d76a089cbc5d4cef7533f25e42b9a" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.289375 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.290197 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" (UID: "fe5bff0c-1f97-4c2d-9f95-46c7c3799d27"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.307049 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.307027495 podStartE2EDuration="2.307027495s" podCreationTimestamp="2026-01-30 08:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:14.299737244 +0000 UTC m=+1372.995284353" watchObservedRunningTime="2026-01-30 08:32:14.307027495 +0000 UTC m=+1373.002574614" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.331206 4870 scope.go:117] "RemoveContainer" containerID="75846033cc04fdf75214f4cc82a37a97af903516f1d459b619c65d4c2c05c4e5" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.336136 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" (UID: "fe5bff0c-1f97-4c2d-9f95-46c7c3799d27"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.347460 4870 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.347491 4870 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.347503 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnc7j\" (UniqueName: \"kubernetes.io/projected/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-kube-api-access-hnc7j\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.347513 4870 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.347523 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.348667 4870 scope.go:117] "RemoveContainer" containerID="919ec938ff2a2a8f2ab814ba8d607fde425c77054e1f0ed868c053bea67814e2" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.367788 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" (UID: "fe5bff0c-1f97-4c2d-9f95-46c7c3799d27"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.373056 4870 scope.go:117] "RemoveContainer" containerID="718104975a12146cc00fc56171ab65a7bdbb1698df506296389cd8521d23c531" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.393196 4870 scope.go:117] "RemoveContainer" containerID="74609d48db351a4551e4a14571d8ceeb785d76a089cbc5d4cef7533f25e42b9a" Jan 30 08:32:14 crc kubenswrapper[4870]: E0130 08:32:14.393706 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74609d48db351a4551e4a14571d8ceeb785d76a089cbc5d4cef7533f25e42b9a\": container with ID starting with 74609d48db351a4551e4a14571d8ceeb785d76a089cbc5d4cef7533f25e42b9a not found: ID does not exist" containerID="74609d48db351a4551e4a14571d8ceeb785d76a089cbc5d4cef7533f25e42b9a" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.393749 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74609d48db351a4551e4a14571d8ceeb785d76a089cbc5d4cef7533f25e42b9a"} err="failed to get container status \"74609d48db351a4551e4a14571d8ceeb785d76a089cbc5d4cef7533f25e42b9a\": rpc error: code = NotFound desc = could not find container \"74609d48db351a4551e4a14571d8ceeb785d76a089cbc5d4cef7533f25e42b9a\": container with ID starting with 74609d48db351a4551e4a14571d8ceeb785d76a089cbc5d4cef7533f25e42b9a not found: ID does not exist" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.393779 4870 scope.go:117] "RemoveContainer" containerID="75846033cc04fdf75214f4cc82a37a97af903516f1d459b619c65d4c2c05c4e5" Jan 30 08:32:14 crc kubenswrapper[4870]: E0130 08:32:14.394190 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75846033cc04fdf75214f4cc82a37a97af903516f1d459b619c65d4c2c05c4e5\": container with ID starting with 75846033cc04fdf75214f4cc82a37a97af903516f1d459b619c65d4c2c05c4e5 not found: ID does not exist" containerID="75846033cc04fdf75214f4cc82a37a97af903516f1d459b619c65d4c2c05c4e5" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.394226 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75846033cc04fdf75214f4cc82a37a97af903516f1d459b619c65d4c2c05c4e5"} err="failed to get container status \"75846033cc04fdf75214f4cc82a37a97af903516f1d459b619c65d4c2c05c4e5\": rpc error: code = NotFound desc = could not find container \"75846033cc04fdf75214f4cc82a37a97af903516f1d459b619c65d4c2c05c4e5\": container with ID starting with 75846033cc04fdf75214f4cc82a37a97af903516f1d459b619c65d4c2c05c4e5 not found: ID does not exist" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.394253 4870 scope.go:117] "RemoveContainer" containerID="919ec938ff2a2a8f2ab814ba8d607fde425c77054e1f0ed868c053bea67814e2" Jan 30 08:32:14 crc kubenswrapper[4870]: E0130 08:32:14.394556 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"919ec938ff2a2a8f2ab814ba8d607fde425c77054e1f0ed868c053bea67814e2\": container with ID starting with 919ec938ff2a2a8f2ab814ba8d607fde425c77054e1f0ed868c053bea67814e2 not found: ID does not exist" containerID="919ec938ff2a2a8f2ab814ba8d607fde425c77054e1f0ed868c053bea67814e2" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.394585 4870 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"919ec938ff2a2a8f2ab814ba8d607fde425c77054e1f0ed868c053bea67814e2"} err="failed to get container status \"919ec938ff2a2a8f2ab814ba8d607fde425c77054e1f0ed868c053bea67814e2\": rpc error: code = NotFound desc = could not find container \"919ec938ff2a2a8f2ab814ba8d607fde425c77054e1f0ed868c053bea67814e2\": container with ID starting with 919ec938ff2a2a8f2ab814ba8d607fde425c77054e1f0ed868c053bea67814e2 not found: ID does not exist" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.394608 4870 scope.go:117] "RemoveContainer" containerID="718104975a12146cc00fc56171ab65a7bdbb1698df506296389cd8521d23c531" Jan 30 08:32:14 crc kubenswrapper[4870]: E0130 08:32:14.394845 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"718104975a12146cc00fc56171ab65a7bdbb1698df506296389cd8521d23c531\": container with ID starting with 718104975a12146cc00fc56171ab65a7bdbb1698df506296389cd8521d23c531 not found: ID does not exist" containerID="718104975a12146cc00fc56171ab65a7bdbb1698df506296389cd8521d23c531" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.395045 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"718104975a12146cc00fc56171ab65a7bdbb1698df506296389cd8521d23c531"} err="failed to get container status \"718104975a12146cc00fc56171ab65a7bdbb1698df506296389cd8521d23c531\": rpc error: code = NotFound desc = could not find container \"718104975a12146cc00fc56171ab65a7bdbb1698df506296389cd8521d23c531\": container with ID starting with 718104975a12146cc00fc56171ab65a7bdbb1698df506296389cd8521d23c531 not found: ID does not exist" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.407438 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-config-data" (OuterVolumeSpecName: "config-data") pod "fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" (UID: "fe5bff0c-1f97-4c2d-9f95-46c7c3799d27"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.449888 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.450109 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.626460 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.639310 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.653782 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:32:14 crc kubenswrapper[4870]: E0130 08:32:14.654324 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" containerName="ceilometer-central-agent" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.654346 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" containerName="ceilometer-central-agent" Jan 30 08:32:14 crc kubenswrapper[4870]: E0130 08:32:14.654364 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" containerName="proxy-httpd" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.654372 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" containerName="proxy-httpd" Jan 30 08:32:14 crc kubenswrapper[4870]: E0130 08:32:14.654383 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" containerName="ceilometer-notification-agent" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.654392 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" containerName="ceilometer-notification-agent" Jan 30 08:32:14 crc kubenswrapper[4870]: E0130 08:32:14.654407 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" containerName="sg-core" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.654414 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" containerName="sg-core" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.654653 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" containerName="sg-core" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.654916 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" containerName="ceilometer-central-agent" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.654949 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" containerName="ceilometer-notification-agent" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.654966 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" containerName="proxy-httpd" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.657349 4870 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.661460 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.663015 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.663244 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.673111 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.756030 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0944a474-a4a5-4ff7-95cf-cd783c051a16-run-httpd\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.756098 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0944a474-a4a5-4ff7-95cf-cd783c051a16-scripts\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.756128 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0944a474-a4a5-4ff7-95cf-cd783c051a16-config-data\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.756166 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0944a474-a4a5-4ff7-95cf-cd783c051a16-log-httpd\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.756218 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0944a474-a4a5-4ff7-95cf-cd783c051a16-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.756246 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdxzb\" (UniqueName: \"kubernetes.io/projected/0944a474-a4a5-4ff7-95cf-cd783c051a16-kube-api-access-kdxzb\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.756286 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0944a474-a4a5-4ff7-95cf-cd783c051a16-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.756327 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0944a474-a4a5-4ff7-95cf-cd783c051a16-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.858515 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0944a474-a4a5-4ff7-95cf-cd783c051a16-config-data\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.859137 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0944a474-a4a5-4ff7-95cf-cd783c051a16-log-httpd\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.859228 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0944a474-a4a5-4ff7-95cf-cd783c051a16-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.859266 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdxzb\" (UniqueName: \"kubernetes.io/projected/0944a474-a4a5-4ff7-95cf-cd783c051a16-kube-api-access-kdxzb\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.859343 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0944a474-a4a5-4ff7-95cf-cd783c051a16-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.859422 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0944a474-a4a5-4ff7-95cf-cd783c051a16-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.859486 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0944a474-a4a5-4ff7-95cf-cd783c051a16-run-httpd\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.859570 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0944a474-a4a5-4ff7-95cf-cd783c051a16-scripts\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.860068 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0944a474-a4a5-4ff7-95cf-cd783c051a16-log-httpd\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.860375 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0944a474-a4a5-4ff7-95cf-cd783c051a16-run-httpd\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.866860 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0944a474-a4a5-4ff7-95cf-cd783c051a16-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.867155 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0944a474-a4a5-4ff7-95cf-cd783c051a16-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.867564 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0944a474-a4a5-4ff7-95cf-cd783c051a16-config-data\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.869167 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0944a474-a4a5-4ff7-95cf-cd783c051a16-scripts\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.882437 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0944a474-a4a5-4ff7-95cf-cd783c051a16-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.889666 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdxzb\" (UniqueName: \"kubernetes.io/projected/0944a474-a4a5-4ff7-95cf-cd783c051a16-kube-api-access-kdxzb\") pod \"ceilometer-0\" (UID: \"0944a474-a4a5-4ff7-95cf-cd783c051a16\") " pod="openstack/ceilometer-0" Jan 30 08:32:14 crc kubenswrapper[4870]: I0130 08:32:14.971572 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:32:15 crc kubenswrapper[4870]: W0130 08:32:15.455331 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0944a474_a4a5_4ff7_95cf_cd783c051a16.slice/crio-750b36c18f29317c6527340cadcc3a4ba0e29ace741d3132fcf943a00dce946a WatchSource:0}: Error finding container 750b36c18f29317c6527340cadcc3a4ba0e29ace741d3132fcf943a00dce946a: Status 404 returned error can't find the container with id 750b36c18f29317c6527340cadcc3a4ba0e29ace741d3132fcf943a00dce946a Jan 30 08:32:15 crc kubenswrapper[4870]: I0130 08:32:15.460159 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:32:16 crc kubenswrapper[4870]: I0130 08:32:16.085703 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe5bff0c-1f97-4c2d-9f95-46c7c3799d27" path="/var/lib/kubelet/pods/fe5bff0c-1f97-4c2d-9f95-46c7c3799d27/volumes" Jan 30 08:32:16 crc kubenswrapper[4870]: I0130 08:32:16.309736 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0944a474-a4a5-4ff7-95cf-cd783c051a16","Type":"ContainerStarted","Data":"d4557d1fa9418b5f4939daed4023af55cb89d4096102f6bba8e64b01ea09b0cc"} Jan 30 08:32:16 crc kubenswrapper[4870]: I0130 08:32:16.309776 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0944a474-a4a5-4ff7-95cf-cd783c051a16","Type":"ContainerStarted","Data":"750b36c18f29317c6527340cadcc3a4ba0e29ace741d3132fcf943a00dce946a"} Jan 30 08:32:16 crc kubenswrapper[4870]: I0130 08:32:16.793049 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:32:16 crc kubenswrapper[4870]: I0130 08:32:16.846676 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7777964479-kzgv2"] Jan 30 08:32:16 crc kubenswrapper[4870]: I0130 08:32:16.846921 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7777964479-kzgv2" podUID="d5925267-e75f-4398-af96-6856710c57f3" containerName="dnsmasq-dns" containerID="cri-o://fa09eeaef0e8d067370ba4e9a769247437b75f3bbc783ef72b9b39a713b37db0" gracePeriod=10 Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.326018 4870 generic.go:334] "Generic (PLEG): container finished" podID="d5925267-e75f-4398-af96-6856710c57f3" containerID="fa09eeaef0e8d067370ba4e9a769247437b75f3bbc783ef72b9b39a713b37db0" exitCode=0 Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.326104 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7777964479-kzgv2" event={"ID":"d5925267-e75f-4398-af96-6856710c57f3","Type":"ContainerDied","Data":"fa09eeaef0e8d067370ba4e9a769247437b75f3bbc783ef72b9b39a713b37db0"} Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.326386 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7777964479-kzgv2" event={"ID":"d5925267-e75f-4398-af96-6856710c57f3","Type":"ContainerDied","Data":"0d8401900436a6761c400a0be0a0bdd42a9aa8031b291c68c5469cef3ec4cd02"} Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.326401 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d8401900436a6761c400a0be0a0bdd42a9aa8031b291c68c5469cef3ec4cd02" Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.328904 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0944a474-a4a5-4ff7-95cf-cd783c051a16","Type":"ContainerStarted","Data":"d46ec846e503baec4f5ee7cc68d33f2537579aee1d790bfccd66ca9508546887"} Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.409277 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.509045 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-dns-swift-storage-0\") pod \"d5925267-e75f-4398-af96-6856710c57f3\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.509392 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-ovsdbserver-sb\") pod \"d5925267-e75f-4398-af96-6856710c57f3\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.509529 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-config\") pod \"d5925267-e75f-4398-af96-6856710c57f3\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.509563 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br4sd\" (UniqueName: \"kubernetes.io/projected/d5925267-e75f-4398-af96-6856710c57f3-kube-api-access-br4sd\") pod \"d5925267-e75f-4398-af96-6856710c57f3\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.509644 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-dns-svc\") pod \"d5925267-e75f-4398-af96-6856710c57f3\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.509689 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-ovsdbserver-nb\") pod \"d5925267-e75f-4398-af96-6856710c57f3\" (UID: \"d5925267-e75f-4398-af96-6856710c57f3\") " Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.515996 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5925267-e75f-4398-af96-6856710c57f3-kube-api-access-br4sd" (OuterVolumeSpecName: "kube-api-access-br4sd") pod "d5925267-e75f-4398-af96-6856710c57f3" (UID: "d5925267-e75f-4398-af96-6856710c57f3"). InnerVolumeSpecName "kube-api-access-br4sd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.565189 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d5925267-e75f-4398-af96-6856710c57f3" (UID: "d5925267-e75f-4398-af96-6856710c57f3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.567527 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d5925267-e75f-4398-af96-6856710c57f3" (UID: "d5925267-e75f-4398-af96-6856710c57f3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.578455 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d5925267-e75f-4398-af96-6856710c57f3" (UID: "d5925267-e75f-4398-af96-6856710c57f3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.596199 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-config" (OuterVolumeSpecName: "config") pod "d5925267-e75f-4398-af96-6856710c57f3" (UID: "d5925267-e75f-4398-af96-6856710c57f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.604637 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d5925267-e75f-4398-af96-6856710c57f3" (UID: "d5925267-e75f-4398-af96-6856710c57f3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.611684 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.611721 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.611735 4870 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.611748 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.611763 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5925267-e75f-4398-af96-6856710c57f3-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:17 crc kubenswrapper[4870]: I0130 08:32:17.611774 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-br4sd\" (UniqueName: \"kubernetes.io/projected/d5925267-e75f-4398-af96-6856710c57f3-kube-api-access-br4sd\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:18 crc kubenswrapper[4870]: I0130 08:32:18.343790 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7777964479-kzgv2" Jan 30 08:32:18 crc kubenswrapper[4870]: I0130 08:32:18.343779 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0944a474-a4a5-4ff7-95cf-cd783c051a16","Type":"ContainerStarted","Data":"791a5b366987d43d526b4f900e7e9b4645980fa599dfbf26ae5ef4bb43b58752"} Jan 30 08:32:18 crc kubenswrapper[4870]: I0130 08:32:18.372648 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7777964479-kzgv2"] Jan 30 08:32:18 crc kubenswrapper[4870]: I0130 08:32:18.386507 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7777964479-kzgv2"] Jan 30 08:32:19 crc kubenswrapper[4870]: I0130 08:32:19.363627 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0944a474-a4a5-4ff7-95cf-cd783c051a16","Type":"ContainerStarted","Data":"a316b62987c4d89c420cb63d6dd7a4ee879dba25469888ddee99ccb8f6317ee1"} Jan 30 08:32:19 crc kubenswrapper[4870]: I0130 08:32:19.364090 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 08:32:19 crc kubenswrapper[4870]: I0130 08:32:19.394759 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7609742800000001 podStartE2EDuration="5.394736781s" podCreationTimestamp="2026-01-30 08:32:14 +0000 UTC" firstStartedPulling="2026-01-30 08:32:15.458451797 +0000 UTC m=+1374.153998916" lastFinishedPulling="2026-01-30 08:32:19.092214308 +0000 UTC m=+1377.787761417" observedRunningTime="2026-01-30 08:32:19.389991792 +0000 UTC m=+1378.085538911" watchObservedRunningTime="2026-01-30 08:32:19.394736781 +0000 UTC m=+1378.090283900" Jan 30 08:32:20 crc kubenswrapper[4870]: I0130 08:32:20.085315 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5925267-e75f-4398-af96-6856710c57f3" path="/var/lib/kubelet/pods/d5925267-e75f-4398-af96-6856710c57f3/volumes" Jan 30 08:32:22 crc kubenswrapper[4870]: I0130 08:32:22.584713 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 08:32:22 crc kubenswrapper[4870]: I0130 08:32:22.585860 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 08:32:23 crc kubenswrapper[4870]: I0130 08:32:23.602145 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ed40aa22-a330-46ab-9971-39e764e63ff7" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 08:32:23 crc kubenswrapper[4870]: I0130 08:32:23.602231 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ed40aa22-a330-46ab-9971-39e764e63ff7" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 08:32:25 crc kubenswrapper[4870]: I0130 08:32:25.250095 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:32:25 crc kubenswrapper[4870]: I0130 08:32:25.250473 4870 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:32:25 crc kubenswrapper[4870]: I0130 08:32:25.250540 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:32:25 crc kubenswrapper[4870]: I0130 08:32:25.251482 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fed0dc1b3541c4793a049fc7617c3773e2d05f0ebfb934d3acc5ededede3b844"} pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:32:25 crc kubenswrapper[4870]: I0130 08:32:25.251579 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" containerID="cri-o://fed0dc1b3541c4793a049fc7617c3773e2d05f0ebfb934d3acc5ededede3b844" gracePeriod=600 Jan 30 08:32:25 crc kubenswrapper[4870]: I0130 08:32:25.434742 4870 generic.go:334] "Generic (PLEG): container finished" podID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerID="fed0dc1b3541c4793a049fc7617c3773e2d05f0ebfb934d3acc5ededede3b844" exitCode=0 Jan 30 08:32:25 crc kubenswrapper[4870]: I0130 08:32:25.434798 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerDied","Data":"fed0dc1b3541c4793a049fc7617c3773e2d05f0ebfb934d3acc5ededede3b844"} Jan 30 08:32:25 crc kubenswrapper[4870]: I0130 08:32:25.434844 4870 scope.go:117] "RemoveContainer" containerID="736e1ea4b0b2b4fa81dc9ec4fa9950e05f221b62734f7e8de7d7969e9158f7ae" Jan 30 08:32:26 crc kubenswrapper[4870]: I0130 08:32:26.452703 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerStarted","Data":"792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49"} Jan 30 08:32:32 crc kubenswrapper[4870]: I0130 08:32:32.600691 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 08:32:32 crc kubenswrapper[4870]: I0130 08:32:32.601956 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 08:32:32 crc kubenswrapper[4870]: I0130 08:32:32.615798 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 08:32:32 crc kubenswrapper[4870]: I0130 08:32:32.620058 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 08:32:33 crc kubenswrapper[4870]: I0130 08:32:33.551416 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 08:32:33 crc kubenswrapper[4870]: I0130 08:32:33.571498 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 08:32:44 crc kubenswrapper[4870]: I0130 08:32:44.990828 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ceilometer-0" Jan 30 08:32:49 crc kubenswrapper[4870]: I0130 08:32:49.484952 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d7xmh"] Jan 30 08:32:49 crc kubenswrapper[4870]: E0130 08:32:49.485873 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5925267-e75f-4398-af96-6856710c57f3" containerName="init" Jan 30 08:32:49 crc kubenswrapper[4870]: I0130 08:32:49.485901 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5925267-e75f-4398-af96-6856710c57f3" containerName="init" Jan 30 08:32:49 crc kubenswrapper[4870]: E0130 08:32:49.485933 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5925267-e75f-4398-af96-6856710c57f3" containerName="dnsmasq-dns" Jan 30 08:32:49 crc kubenswrapper[4870]: I0130 08:32:49.485939 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5925267-e75f-4398-af96-6856710c57f3" containerName="dnsmasq-dns" Jan 30 08:32:49 crc kubenswrapper[4870]: I0130 08:32:49.486111 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5925267-e75f-4398-af96-6856710c57f3" containerName="dnsmasq-dns" Jan 30 08:32:49 crc kubenswrapper[4870]: I0130 08:32:49.487669 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d7xmh" Jan 30 08:32:49 crc kubenswrapper[4870]: I0130 08:32:49.513795 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d7xmh"] Jan 30 08:32:49 crc kubenswrapper[4870]: I0130 08:32:49.522929 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeaabceb-b50c-48b6-b72d-d759f1bda8c1-catalog-content\") pod \"redhat-operators-d7xmh\" (UID: \"aeaabceb-b50c-48b6-b72d-d759f1bda8c1\") " pod="openshift-marketplace/redhat-operators-d7xmh" Jan 30 08:32:49 crc kubenswrapper[4870]: I0130 08:32:49.523097 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeaabceb-b50c-48b6-b72d-d759f1bda8c1-utilities\") pod \"redhat-operators-d7xmh\" (UID: \"aeaabceb-b50c-48b6-b72d-d759f1bda8c1\") " pod="openshift-marketplace/redhat-operators-d7xmh" Jan 30 08:32:49 crc kubenswrapper[4870]: I0130 08:32:49.523213 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mqzj\" (UniqueName: \"kubernetes.io/projected/aeaabceb-b50c-48b6-b72d-d759f1bda8c1-kube-api-access-4mqzj\") pod \"redhat-operators-d7xmh\" (UID: \"aeaabceb-b50c-48b6-b72d-d759f1bda8c1\") " pod="openshift-marketplace/redhat-operators-d7xmh" Jan 30 08:32:49 crc kubenswrapper[4870]: I0130 08:32:49.625514 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeaabceb-b50c-48b6-b72d-d759f1bda8c1-utilities\") pod \"redhat-operators-d7xmh\" (UID: \"aeaabceb-b50c-48b6-b72d-d759f1bda8c1\") " pod="openshift-marketplace/redhat-operators-d7xmh" Jan 30 08:32:49 crc kubenswrapper[4870]: I0130 08:32:49.625789 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mqzj\" (UniqueName: \"kubernetes.io/projected/aeaabceb-b50c-48b6-b72d-d759f1bda8c1-kube-api-access-4mqzj\") pod \"redhat-operators-d7xmh\" (UID: \"aeaabceb-b50c-48b6-b72d-d759f1bda8c1\") " pod="openshift-marketplace/redhat-operators-d7xmh" Jan 30 08:32:49 crc 
kubenswrapper[4870]: I0130 08:32:49.626079 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeaabceb-b50c-48b6-b72d-d759f1bda8c1-catalog-content\") pod \"redhat-operators-d7xmh\" (UID: \"aeaabceb-b50c-48b6-b72d-d759f1bda8c1\") " pod="openshift-marketplace/redhat-operators-d7xmh" Jan 30 08:32:49 crc kubenswrapper[4870]: I0130 08:32:49.626144 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeaabceb-b50c-48b6-b72d-d759f1bda8c1-utilities\") pod \"redhat-operators-d7xmh\" (UID: \"aeaabceb-b50c-48b6-b72d-d759f1bda8c1\") " pod="openshift-marketplace/redhat-operators-d7xmh" Jan 30 08:32:49 crc kubenswrapper[4870]: I0130 08:32:49.626337 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeaabceb-b50c-48b6-b72d-d759f1bda8c1-catalog-content\") pod \"redhat-operators-d7xmh\" (UID: \"aeaabceb-b50c-48b6-b72d-d759f1bda8c1\") " pod="openshift-marketplace/redhat-operators-d7xmh" Jan 30 08:32:49 crc kubenswrapper[4870]: I0130 08:32:49.644512 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mqzj\" (UniqueName: \"kubernetes.io/projected/aeaabceb-b50c-48b6-b72d-d759f1bda8c1-kube-api-access-4mqzj\") pod \"redhat-operators-d7xmh\" (UID: \"aeaabceb-b50c-48b6-b72d-d759f1bda8c1\") " pod="openshift-marketplace/redhat-operators-d7xmh" Jan 30 08:32:49 crc kubenswrapper[4870]: I0130 08:32:49.806852 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d7xmh" Jan 30 08:32:50 crc kubenswrapper[4870]: I0130 08:32:50.294231 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d7xmh"] Jan 30 08:32:50 crc kubenswrapper[4870]: I0130 08:32:50.900837 4870 generic.go:334] "Generic (PLEG): container finished" podID="aeaabceb-b50c-48b6-b72d-d759f1bda8c1" containerID="5f3286bcb78108dac7517ba9bb6ab1b643045f2decd36211166b9e846ca3555b" exitCode=0 Jan 30 08:32:50 crc kubenswrapper[4870]: I0130 08:32:50.900907 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7xmh" event={"ID":"aeaabceb-b50c-48b6-b72d-d759f1bda8c1","Type":"ContainerDied","Data":"5f3286bcb78108dac7517ba9bb6ab1b643045f2decd36211166b9e846ca3555b"} Jan 30 08:32:50 crc kubenswrapper[4870]: I0130 08:32:50.900938 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7xmh" event={"ID":"aeaabceb-b50c-48b6-b72d-d759f1bda8c1","Type":"ContainerStarted","Data":"ce7d2fa7b2f490ff295696455cb253c22c42342b96f4aac8e217dc481875d3d8"} Jan 30 08:32:52 crc kubenswrapper[4870]: I0130 08:32:52.934283 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7xmh" event={"ID":"aeaabceb-b50c-48b6-b72d-d759f1bda8c1","Type":"ContainerStarted","Data":"949d5f0f4a17100c16a04fa345ac82cf5d135826bf2bae1bad472102789c52ff"} Jan 30 08:32:53 crc kubenswrapper[4870]: I0130 08:32:53.946297 4870 generic.go:334] "Generic (PLEG): container finished" podID="aeaabceb-b50c-48b6-b72d-d759f1bda8c1" containerID="949d5f0f4a17100c16a04fa345ac82cf5d135826bf2bae1bad472102789c52ff" exitCode=0 Jan 30 08:32:53 crc kubenswrapper[4870]: I0130 08:32:53.946397 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7xmh" 
event={"ID":"aeaabceb-b50c-48b6-b72d-d759f1bda8c1","Type":"ContainerDied","Data":"949d5f0f4a17100c16a04fa345ac82cf5d135826bf2bae1bad472102789c52ff"} Jan 30 08:32:54 crc kubenswrapper[4870]: I0130 08:32:54.888979 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:32:55 crc kubenswrapper[4870]: I0130 08:32:55.818217 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:32:58 crc kubenswrapper[4870]: I0130 08:32:58.992217 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7xmh" event={"ID":"aeaabceb-b50c-48b6-b72d-d759f1bda8c1","Type":"ContainerStarted","Data":"d95aedd09a6225166721cd6f604fbff2dfd2107f2be52447765e651673cee8d8"} Jan 30 08:32:59 crc kubenswrapper[4870]: I0130 08:32:59.010996 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d7xmh" podStartSLOduration=3.042690969 podStartE2EDuration="10.01097556s" podCreationTimestamp="2026-01-30 08:32:49 +0000 UTC" firstStartedPulling="2026-01-30 08:32:50.903332385 +0000 UTC m=+1409.598879504" lastFinishedPulling="2026-01-30 08:32:57.871616986 +0000 UTC m=+1416.567164095" observedRunningTime="2026-01-30 08:32:59.007529911 +0000 UTC m=+1417.703077030" watchObservedRunningTime="2026-01-30 08:32:59.01097556 +0000 UTC m=+1417.706522669" Jan 30 08:32:59 crc kubenswrapper[4870]: I0130 08:32:59.667714 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="97f21b9d-25bf-4a64-94ef-51d83b662ab3" containerName="rabbitmq" containerID="cri-o://2d90fc261d4a0e6355b34b516c467ce7b3ce867fbf835cf5614291d45b33a700" gracePeriod=604796 Jan 30 08:32:59 crc kubenswrapper[4870]: I0130 08:32:59.770802 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="0ee2f8ed-b030-40fc-90fb-32a4e404b1fd" containerName="rabbitmq" containerID="cri-o://121cd78e5a4355f003ebf17a16fbee0ff6a2d7322239a3f9d8f72539da950f1f" gracePeriod=604797 Jan 30 08:32:59 crc kubenswrapper[4870]: I0130 08:32:59.811252 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d7xmh" Jan 30 08:32:59 crc kubenswrapper[4870]: I0130 08:32:59.811303 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d7xmh" Jan 30 08:33:00 crc kubenswrapper[4870]: I0130 08:33:00.884228 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d7xmh" podUID="aeaabceb-b50c-48b6-b72d-d759f1bda8c1" containerName="registry-server" probeResult="failure" output=< Jan 30 08:33:00 crc kubenswrapper[4870]: timeout: failed to connect service ":50051" within 1s Jan 30 08:33:00 crc kubenswrapper[4870]: > Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.011031 4870 generic.go:334] "Generic (PLEG): container finished" podID="97f21b9d-25bf-4a64-94ef-51d83b662ab3" containerID="2d90fc261d4a0e6355b34b516c467ce7b3ce867fbf835cf5614291d45b33a700" exitCode=0 Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.011266 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"97f21b9d-25bf-4a64-94ef-51d83b662ab3","Type":"ContainerDied","Data":"2d90fc261d4a0e6355b34b516c467ce7b3ce867fbf835cf5614291d45b33a700"} Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.389986 4870 util.go:48] "No ready 
Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.502291 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn9pp\" (UniqueName: \"kubernetes.io/projected/97f21b9d-25bf-4a64-94ef-51d83b662ab3-kube-api-access-mn9pp\") pod \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") "
Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.502355 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-tls\") pod \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") "
Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.502385 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/97f21b9d-25bf-4a64-94ef-51d83b662ab3-pod-info\") pod \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") "
Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.502412 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/97f21b9d-25bf-4a64-94ef-51d83b662ab3-plugins-conf\") pod \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") "
Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.502494 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97f21b9d-25bf-4a64-94ef-51d83b662ab3-config-data\") pod \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") "
Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.502536 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-plugins\") pod \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") "
Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.502579 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-erlang-cookie\") pod \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") "
Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.502624 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/97f21b9d-25bf-4a64-94ef-51d83b662ab3-server-conf\") pod \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") "
Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.502672 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-confd\") pod \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") "
Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.502709 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/97f21b9d-25bf-4a64-94ef-51d83b662ab3-erlang-cookie-secret\") pod \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") "
Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.502743 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\" (UID: \"97f21b9d-25bf-4a64-94ef-51d83b662ab3\") "
Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.503355 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97f21b9d-25bf-4a64-94ef-51d83b662ab3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "97f21b9d-25bf-4a64-94ef-51d83b662ab3" (UID: "97f21b9d-25bf-4a64-94ef-51d83b662ab3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.503649 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "97f21b9d-25bf-4a64-94ef-51d83b662ab3" (UID: "97f21b9d-25bf-4a64-94ef-51d83b662ab3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.503824 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "97f21b9d-25bf-4a64-94ef-51d83b662ab3" (UID: "97f21b9d-25bf-4a64-94ef-51d83b662ab3"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.512997 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97f21b9d-25bf-4a64-94ef-51d83b662ab3-kube-api-access-mn9pp" (OuterVolumeSpecName: "kube-api-access-mn9pp") pod "97f21b9d-25bf-4a64-94ef-51d83b662ab3" (UID: "97f21b9d-25bf-4a64-94ef-51d83b662ab3"). InnerVolumeSpecName "kube-api-access-mn9pp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.515093 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "97f21b9d-25bf-4a64-94ef-51d83b662ab3" (UID: "97f21b9d-25bf-4a64-94ef-51d83b662ab3"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.518682 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/97f21b9d-25bf-4a64-94ef-51d83b662ab3-pod-info" (OuterVolumeSpecName: "pod-info") pod "97f21b9d-25bf-4a64-94ef-51d83b662ab3" (UID: "97f21b9d-25bf-4a64-94ef-51d83b662ab3"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.526221 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "97f21b9d-25bf-4a64-94ef-51d83b662ab3" (UID: "97f21b9d-25bf-4a64-94ef-51d83b662ab3"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.531777 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97f21b9d-25bf-4a64-94ef-51d83b662ab3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "97f21b9d-25bf-4a64-94ef-51d83b662ab3" (UID: "97f21b9d-25bf-4a64-94ef-51d83b662ab3"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.570358 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97f21b9d-25bf-4a64-94ef-51d83b662ab3-config-data" (OuterVolumeSpecName: "config-data") pod "97f21b9d-25bf-4a64-94ef-51d83b662ab3" (UID: "97f21b9d-25bf-4a64-94ef-51d83b662ab3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.605371 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn9pp\" (UniqueName: \"kubernetes.io/projected/97f21b9d-25bf-4a64-94ef-51d83b662ab3-kube-api-access-mn9pp\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.606273 4870 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.609912 4870 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/97f21b9d-25bf-4a64-94ef-51d83b662ab3-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.610005 4870 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/97f21b9d-25bf-4a64-94ef-51d83b662ab3-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.610095 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97f21b9d-25bf-4a64-94ef-51d83b662ab3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.610188 4870 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.610270 4870 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.610348 4870 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/97f21b9d-25bf-4a64-94ef-51d83b662ab3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.610447 4870 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.608839 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/97f21b9d-25bf-4a64-94ef-51d83b662ab3-server-conf" (OuterVolumeSpecName: "server-conf") pod "97f21b9d-25bf-4a64-94ef-51d83b662ab3" (UID: "97f21b9d-25bf-4a64-94ef-51d83b662ab3"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.645670 4870 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.712382 4870 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/97f21b9d-25bf-4a64-94ef-51d83b662ab3-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.712746 4870 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.743262 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "97f21b9d-25bf-4a64-94ef-51d83b662ab3" (UID: "97f21b9d-25bf-4a64-94ef-51d83b662ab3"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.814733 4870 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/97f21b9d-25bf-4a64-94ef-51d83b662ab3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:01 crc kubenswrapper[4870]: I0130 08:33:01.880149 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.019653 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-plugins\") pod \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.020838 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-tls\") pod \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.021009 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-pod-info\") pod \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.021141 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-erlang-cookie\") pod \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.021250 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.021371 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-server-conf\") pod \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.021470 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-config-data\") pod \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.021576 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-confd\") pod \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.021694 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-plugins-conf\") pod \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.021847 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-erlang-cookie-secret\") pod \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\" (UID: 
\"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.021973 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4d8r\" (UniqueName: \"kubernetes.io/projected/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-kube-api-access-g4d8r\") pod \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\" (UID: \"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd\") " Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.020437 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0ee2f8ed-b030-40fc-90fb-32a4e404b1fd" (UID: "0ee2f8ed-b030-40fc-90fb-32a4e404b1fd"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.028260 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0ee2f8ed-b030-40fc-90fb-32a4e404b1fd" (UID: "0ee2f8ed-b030-40fc-90fb-32a4e404b1fd"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.029032 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0ee2f8ed-b030-40fc-90fb-32a4e404b1fd" (UID: "0ee2f8ed-b030-40fc-90fb-32a4e404b1fd"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.029274 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0ee2f8ed-b030-40fc-90fb-32a4e404b1fd" (UID: "0ee2f8ed-b030-40fc-90fb-32a4e404b1fd"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.036049 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0ee2f8ed-b030-40fc-90fb-32a4e404b1fd" (UID: "0ee2f8ed-b030-40fc-90fb-32a4e404b1fd"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.042048 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "0ee2f8ed-b030-40fc-90fb-32a4e404b1fd" (UID: "0ee2f8ed-b030-40fc-90fb-32a4e404b1fd"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.046074 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-pod-info" (OuterVolumeSpecName: "pod-info") pod "0ee2f8ed-b030-40fc-90fb-32a4e404b1fd" (UID: "0ee2f8ed-b030-40fc-90fb-32a4e404b1fd"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.046179 4870 generic.go:334] "Generic (PLEG): container finished" podID="0ee2f8ed-b030-40fc-90fb-32a4e404b1fd" containerID="121cd78e5a4355f003ebf17a16fbee0ff6a2d7322239a3f9d8f72539da950f1f" exitCode=0 Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.046277 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd","Type":"ContainerDied","Data":"121cd78e5a4355f003ebf17a16fbee0ff6a2d7322239a3f9d8f72539da950f1f"} Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.046305 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0ee2f8ed-b030-40fc-90fb-32a4e404b1fd","Type":"ContainerDied","Data":"3f0499acc4a6b0c8f2d313af1131c23462a36b6d1d5cfab2eb6312a0f9c1c357"} Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.046315 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.046700 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-kube-api-access-g4d8r" (OuterVolumeSpecName: "kube-api-access-g4d8r") pod "0ee2f8ed-b030-40fc-90fb-32a4e404b1fd" (UID: "0ee2f8ed-b030-40fc-90fb-32a4e404b1fd"). InnerVolumeSpecName "kube-api-access-g4d8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.046323 4870 scope.go:117] "RemoveContainer" containerID="121cd78e5a4355f003ebf17a16fbee0ff6a2d7322239a3f9d8f72539da950f1f" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.088118 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"97f21b9d-25bf-4a64-94ef-51d83b662ab3","Type":"ContainerDied","Data":"e007871b6d10423ef6514301a7948e0b65aeec9e801d811cb06f4a5040316a29"} Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.088227 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.124634 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-config-data" (OuterVolumeSpecName: "config-data") pod "0ee2f8ed-b030-40fc-90fb-32a4e404b1fd" (UID: "0ee2f8ed-b030-40fc-90fb-32a4e404b1fd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.125344 4870 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.125388 4870 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.125402 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4d8r\" (UniqueName: \"kubernetes.io/projected/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-kube-api-access-g4d8r\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.125417 4870 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.125427 4870 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.125436 4870 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.125448 4870 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.125473 4870 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.125485 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.228678 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-server-conf" (OuterVolumeSpecName: "server-conf") pod "0ee2f8ed-b030-40fc-90fb-32a4e404b1fd" (UID: "0ee2f8ed-b030-40fc-90fb-32a4e404b1fd"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.230003 4870 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.240710 4870 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.245528 4870 scope.go:117] "RemoveContainer" containerID="15912521352db9ebeacf4667e3f3fd3607893d70df7becfdb7861d554dec9b49" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.288504 4870 scope.go:117] "RemoveContainer" containerID="121cd78e5a4355f003ebf17a16fbee0ff6a2d7322239a3f9d8f72539da950f1f" Jan 30 08:33:02 crc kubenswrapper[4870]: E0130 08:33:02.292378 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"121cd78e5a4355f003ebf17a16fbee0ff6a2d7322239a3f9d8f72539da950f1f\": container with ID starting with 121cd78e5a4355f003ebf17a16fbee0ff6a2d7322239a3f9d8f72539da950f1f not found: ID does not exist" containerID="121cd78e5a4355f003ebf17a16fbee0ff6a2d7322239a3f9d8f72539da950f1f" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.292426 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"121cd78e5a4355f003ebf17a16fbee0ff6a2d7322239a3f9d8f72539da950f1f"} err="failed to get container status \"121cd78e5a4355f003ebf17a16fbee0ff6a2d7322239a3f9d8f72539da950f1f\": rpc error: code = NotFound desc = could not find container \"121cd78e5a4355f003ebf17a16fbee0ff6a2d7322239a3f9d8f72539da950f1f\": container with ID starting with 121cd78e5a4355f003ebf17a16fbee0ff6a2d7322239a3f9d8f72539da950f1f not found: ID does not exist" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.292454 4870 scope.go:117] "RemoveContainer" containerID="15912521352db9ebeacf4667e3f3fd3607893d70df7becfdb7861d554dec9b49" Jan 30 08:33:02 crc kubenswrapper[4870]: E0130 08:33:02.293036 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15912521352db9ebeacf4667e3f3fd3607893d70df7becfdb7861d554dec9b49\": container with ID starting with 15912521352db9ebeacf4667e3f3fd3607893d70df7becfdb7861d554dec9b49 not found: ID does not exist" containerID="15912521352db9ebeacf4667e3f3fd3607893d70df7becfdb7861d554dec9b49" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.293072 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15912521352db9ebeacf4667e3f3fd3607893d70df7becfdb7861d554dec9b49"} err="failed to get container status \"15912521352db9ebeacf4667e3f3fd3607893d70df7becfdb7861d554dec9b49\": rpc error: code = NotFound desc = could not find container \"15912521352db9ebeacf4667e3f3fd3607893d70df7becfdb7861d554dec9b49\": container with ID starting with 15912521352db9ebeacf4667e3f3fd3607893d70df7becfdb7861d554dec9b49 not found: ID does not exist" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.293110 4870 scope.go:117] "RemoveContainer" containerID="2d90fc261d4a0e6355b34b516c467ce7b3ce867fbf835cf5614291d45b33a700" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.293741 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0ee2f8ed-b030-40fc-90fb-32a4e404b1fd" (UID: "0ee2f8ed-b030-40fc-90fb-32a4e404b1fd"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.312743 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.328713 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.329900 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:33:02 crc kubenswrapper[4870]: E0130 08:33:02.330279 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee2f8ed-b030-40fc-90fb-32a4e404b1fd" containerName="setup-container" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.330297 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee2f8ed-b030-40fc-90fb-32a4e404b1fd" containerName="setup-container" Jan 30 08:33:02 crc kubenswrapper[4870]: E0130 08:33:02.330314 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97f21b9d-25bf-4a64-94ef-51d83b662ab3" containerName="setup-container" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.330321 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="97f21b9d-25bf-4a64-94ef-51d83b662ab3" containerName="setup-container" Jan 30 08:33:02 crc kubenswrapper[4870]: E0130 08:33:02.330339 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97f21b9d-25bf-4a64-94ef-51d83b662ab3" containerName="rabbitmq" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.330346 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="97f21b9d-25bf-4a64-94ef-51d83b662ab3" containerName="rabbitmq" Jan 30 08:33:02 crc kubenswrapper[4870]: E0130 08:33:02.330364 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee2f8ed-b030-40fc-90fb-32a4e404b1fd" containerName="rabbitmq" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.330371 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee2f8ed-b030-40fc-90fb-32a4e404b1fd" containerName="rabbitmq" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.330575 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ee2f8ed-b030-40fc-90fb-32a4e404b1fd" containerName="rabbitmq" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.330593 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="97f21b9d-25bf-4a64-94ef-51d83b662ab3" containerName="rabbitmq" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.331540 4870 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.331615 4870 scope.go:117] "RemoveContainer" containerID="55e6a4b3af15640088e3e1927ba88636a5cf35ec532fc2df3395e46ebcf07d79" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.331616 4870 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.331565 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.339617 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.339765 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.340280 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.340403 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.340429 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-lwd7k" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.340511 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.340634 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.343786 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.421098 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.430341 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.433803 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.433851 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drr68\" (UniqueName: \"kubernetes.io/projected/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-kube-api-access-drr68\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.433887 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.433923 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.433949 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-config-data\") pod 
\"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.433983 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.434018 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.434040 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.434078 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.434101 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.434156 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.442490 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.444661 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.447525 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.447693 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.447806 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.447960 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.448127 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.448310 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.448362 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hr5rb" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.450232 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.536060 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxkqq\" (UniqueName: \"kubernetes.io/projected/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-kube-api-access-bxkqq\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.536125 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.536269 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.536369 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.536415 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.536521 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.536576 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.536614 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drr68\" (UniqueName: \"kubernetes.io/projected/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-kube-api-access-drr68\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.536638 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.536691 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.536721 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.536748 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-config-data\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.536770 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.536828 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.536896 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.536913 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.536958 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.536979 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.536998 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.537028 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.537067 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.537124 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.537660 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.538162 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-config-data\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.539944 4870 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.540245 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.540313 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.540407 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.540469 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.541612 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.543314 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.545268 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.557561 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drr68\" (UniqueName: \"kubernetes.io/projected/bf05f72e-aa42-4296-a7dc-8b742d6e0aab-kube-api-access-drr68\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.585802 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"bf05f72e-aa42-4296-a7dc-8b742d6e0aab\") " pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc 
Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.638786 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.639061 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.639160 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.639237 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.639339 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.639404 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.639470 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.639564 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.639639 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxkqq\" (UniqueName: \"kubernetes.io/projected/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-kube-api-access-bxkqq\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.639725 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.639737 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.639891 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.640373 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.641215 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.639823 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.641328 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.641581 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.647953 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.647961 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0"
pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.648430 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.658585 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.662484 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxkqq\" (UniqueName: \"kubernetes.io/projected/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-kube-api-access-bxkqq\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.664596 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2575ea2c-dc22-4ca2-bf0b-d67eaa330832-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.680631 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2575ea2c-dc22-4ca2-bf0b-d67eaa330832\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:33:02 crc kubenswrapper[4870]: I0130 08:33:02.776409 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:33:03 crc kubenswrapper[4870]: W0130 08:33:03.128494 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf05f72e_aa42_4296_a7dc_8b742d6e0aab.slice/crio-84957b48794c6259279a99608747a29ad7394fc9ca1bd6ce1f98a7f4c10a624e WatchSource:0}: Error finding container 84957b48794c6259279a99608747a29ad7394fc9ca1bd6ce1f98a7f4c10a624e: Status 404 returned error can't find the container with id 84957b48794c6259279a99608747a29ad7394fc9ca1bd6ce1f98a7f4c10a624e Jan 30 08:33:03 crc kubenswrapper[4870]: I0130 08:33:03.131689 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:33:03 crc kubenswrapper[4870]: I0130 08:33:03.325491 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:33:03 crc kubenswrapper[4870]: W0130 08:33:03.327790 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2575ea2c_dc22_4ca2_bf0b_d67eaa330832.slice/crio-a1c0409ce24f2df08527052ff97af7fd0efe806a48a4ecd7876b51529ce24023 WatchSource:0}: Error finding container a1c0409ce24f2df08527052ff97af7fd0efe806a48a4ecd7876b51529ce24023: Status 404 returned error can't find the container with id a1c0409ce24f2df08527052ff97af7fd0efe806a48a4ecd7876b51529ce24023 Jan 30 08:33:04 crc kubenswrapper[4870]: I0130 08:33:04.085060 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ee2f8ed-b030-40fc-90fb-32a4e404b1fd" path="/var/lib/kubelet/pods/0ee2f8ed-b030-40fc-90fb-32a4e404b1fd/volumes" Jan 30 08:33:04 crc kubenswrapper[4870]: I0130 08:33:04.085927 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97f21b9d-25bf-4a64-94ef-51d83b662ab3" path="/var/lib/kubelet/pods/97f21b9d-25bf-4a64-94ef-51d83b662ab3/volumes" Jan 30 08:33:04 crc kubenswrapper[4870]: I0130 08:33:04.110392 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2575ea2c-dc22-4ca2-bf0b-d67eaa330832","Type":"ContainerStarted","Data":"a1c0409ce24f2df08527052ff97af7fd0efe806a48a4ecd7876b51529ce24023"} Jan 30 08:33:04 crc kubenswrapper[4870]: I0130 08:33:04.111464 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bf05f72e-aa42-4296-a7dc-8b742d6e0aab","Type":"ContainerStarted","Data":"84957b48794c6259279a99608747a29ad7394fc9ca1bd6ce1f98a7f4c10a624e"} Jan 30 08:33:05 crc kubenswrapper[4870]: I0130 08:33:05.124279 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2575ea2c-dc22-4ca2-bf0b-d67eaa330832","Type":"ContainerStarted","Data":"b54d083e49539914abb80a09d56280e72de1f73b7b8543555a5595e346f4fb9e"} Jan 30 08:33:05 crc kubenswrapper[4870]: I0130 08:33:05.126458 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bf05f72e-aa42-4296-a7dc-8b742d6e0aab","Type":"ContainerStarted","Data":"3624472f52ac7be1319516a8ee600eb767c9e0a446d907875f2e5857dd2b649f"} Jan 30 08:33:09 crc kubenswrapper[4870]: I0130 08:33:09.894607 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d7xmh" Jan 30 08:33:09 crc kubenswrapper[4870]: I0130 08:33:09.945958 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-d7xmh" Jan 30 08:33:10 crc kubenswrapper[4870]: I0130 08:33:10.137347 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d7xmh"] Jan 30 08:33:11 crc kubenswrapper[4870]: I0130 08:33:11.173844 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d7xmh" podUID="aeaabceb-b50c-48b6-b72d-d759f1bda8c1" containerName="registry-server" containerID="cri-o://d95aedd09a6225166721cd6f604fbff2dfd2107f2be52447765e651673cee8d8" gracePeriod=2 Jan 30 08:33:11 crc kubenswrapper[4870]: I0130 08:33:11.649715 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d7xmh" Jan 30 08:33:11 crc kubenswrapper[4870]: I0130 08:33:11.828007 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeaabceb-b50c-48b6-b72d-d759f1bda8c1-catalog-content\") pod \"aeaabceb-b50c-48b6-b72d-d759f1bda8c1\" (UID: \"aeaabceb-b50c-48b6-b72d-d759f1bda8c1\") " Jan 30 08:33:11 crc kubenswrapper[4870]: I0130 08:33:11.828416 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeaabceb-b50c-48b6-b72d-d759f1bda8c1-utilities\") pod \"aeaabceb-b50c-48b6-b72d-d759f1bda8c1\" (UID: \"aeaabceb-b50c-48b6-b72d-d759f1bda8c1\") " Jan 30 08:33:11 crc kubenswrapper[4870]: I0130 08:33:11.828546 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mqzj\" (UniqueName: \"kubernetes.io/projected/aeaabceb-b50c-48b6-b72d-d759f1bda8c1-kube-api-access-4mqzj\") pod \"aeaabceb-b50c-48b6-b72d-d759f1bda8c1\" (UID: \"aeaabceb-b50c-48b6-b72d-d759f1bda8c1\") " Jan 30 08:33:11 crc kubenswrapper[4870]: I0130 08:33:11.828989 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aeaabceb-b50c-48b6-b72d-d759f1bda8c1-utilities" (OuterVolumeSpecName: "utilities") pod "aeaabceb-b50c-48b6-b72d-d759f1bda8c1" (UID: "aeaabceb-b50c-48b6-b72d-d759f1bda8c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:33:11 crc kubenswrapper[4870]: I0130 08:33:11.829502 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeaabceb-b50c-48b6-b72d-d759f1bda8c1-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:11 crc kubenswrapper[4870]: I0130 08:33:11.838640 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeaabceb-b50c-48b6-b72d-d759f1bda8c1-kube-api-access-4mqzj" (OuterVolumeSpecName: "kube-api-access-4mqzj") pod "aeaabceb-b50c-48b6-b72d-d759f1bda8c1" (UID: "aeaabceb-b50c-48b6-b72d-d759f1bda8c1"). InnerVolumeSpecName "kube-api-access-4mqzj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:33:11 crc kubenswrapper[4870]: I0130 08:33:11.931332 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mqzj\" (UniqueName: \"kubernetes.io/projected/aeaabceb-b50c-48b6-b72d-d759f1bda8c1-kube-api-access-4mqzj\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:11 crc kubenswrapper[4870]: I0130 08:33:11.945182 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aeaabceb-b50c-48b6-b72d-d759f1bda8c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aeaabceb-b50c-48b6-b72d-d759f1bda8c1" (UID: "aeaabceb-b50c-48b6-b72d-d759f1bda8c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:33:12 crc kubenswrapper[4870]: I0130 08:33:12.033954 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeaabceb-b50c-48b6-b72d-d759f1bda8c1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:12 crc kubenswrapper[4870]: I0130 08:33:12.187859 4870 generic.go:334] "Generic (PLEG): container finished" podID="aeaabceb-b50c-48b6-b72d-d759f1bda8c1" containerID="d95aedd09a6225166721cd6f604fbff2dfd2107f2be52447765e651673cee8d8" exitCode=0 Jan 30 08:33:12 crc kubenswrapper[4870]: I0130 08:33:12.187948 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7xmh" event={"ID":"aeaabceb-b50c-48b6-b72d-d759f1bda8c1","Type":"ContainerDied","Data":"d95aedd09a6225166721cd6f604fbff2dfd2107f2be52447765e651673cee8d8"} Jan 30 08:33:12 crc kubenswrapper[4870]: I0130 08:33:12.187986 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7xmh" event={"ID":"aeaabceb-b50c-48b6-b72d-d759f1bda8c1","Type":"ContainerDied","Data":"ce7d2fa7b2f490ff295696455cb253c22c42342b96f4aac8e217dc481875d3d8"} Jan 30 08:33:12 crc kubenswrapper[4870]: I0130 08:33:12.188013 4870 scope.go:117] "RemoveContainer" containerID="d95aedd09a6225166721cd6f604fbff2dfd2107f2be52447765e651673cee8d8" Jan 30 08:33:12 crc kubenswrapper[4870]: I0130 08:33:12.190337 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d7xmh" Jan 30 08:33:12 crc kubenswrapper[4870]: I0130 08:33:12.213619 4870 scope.go:117] "RemoveContainer" containerID="949d5f0f4a17100c16a04fa345ac82cf5d135826bf2bae1bad472102789c52ff" Jan 30 08:33:12 crc kubenswrapper[4870]: I0130 08:33:12.221256 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d7xmh"] Jan 30 08:33:12 crc kubenswrapper[4870]: I0130 08:33:12.234081 4870 scope.go:117] "RemoveContainer" containerID="5f3286bcb78108dac7517ba9bb6ab1b643045f2decd36211166b9e846ca3555b" Jan 30 08:33:12 crc kubenswrapper[4870]: I0130 08:33:12.234403 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d7xmh"] Jan 30 08:33:12 crc kubenswrapper[4870]: I0130 08:33:12.293319 4870 scope.go:117] "RemoveContainer" containerID="d95aedd09a6225166721cd6f604fbff2dfd2107f2be52447765e651673cee8d8" Jan 30 08:33:12 crc kubenswrapper[4870]: E0130 08:33:12.293922 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d95aedd09a6225166721cd6f604fbff2dfd2107f2be52447765e651673cee8d8\": container with ID starting with d95aedd09a6225166721cd6f604fbff2dfd2107f2be52447765e651673cee8d8 not found: ID does not exist" containerID="d95aedd09a6225166721cd6f604fbff2dfd2107f2be52447765e651673cee8d8" Jan 30 08:33:12 crc kubenswrapper[4870]: I0130 08:33:12.293974 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d95aedd09a6225166721cd6f604fbff2dfd2107f2be52447765e651673cee8d8"} err="failed to get container status \"d95aedd09a6225166721cd6f604fbff2dfd2107f2be52447765e651673cee8d8\": rpc error: code = NotFound desc = could not find container \"d95aedd09a6225166721cd6f604fbff2dfd2107f2be52447765e651673cee8d8\": container with ID starting with d95aedd09a6225166721cd6f604fbff2dfd2107f2be52447765e651673cee8d8 not found: ID does not exist" Jan 30 08:33:12 crc kubenswrapper[4870]: I0130 08:33:12.294006 4870 scope.go:117] "RemoveContainer" containerID="949d5f0f4a17100c16a04fa345ac82cf5d135826bf2bae1bad472102789c52ff" Jan 30 08:33:12 crc kubenswrapper[4870]: E0130 08:33:12.294453 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"949d5f0f4a17100c16a04fa345ac82cf5d135826bf2bae1bad472102789c52ff\": container with ID starting with 949d5f0f4a17100c16a04fa345ac82cf5d135826bf2bae1bad472102789c52ff not found: ID does not exist" containerID="949d5f0f4a17100c16a04fa345ac82cf5d135826bf2bae1bad472102789c52ff" Jan 30 08:33:12 crc kubenswrapper[4870]: I0130 08:33:12.294664 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"949d5f0f4a17100c16a04fa345ac82cf5d135826bf2bae1bad472102789c52ff"} err="failed to get container status \"949d5f0f4a17100c16a04fa345ac82cf5d135826bf2bae1bad472102789c52ff\": rpc error: code = NotFound desc = could not find container \"949d5f0f4a17100c16a04fa345ac82cf5d135826bf2bae1bad472102789c52ff\": container with ID starting with 949d5f0f4a17100c16a04fa345ac82cf5d135826bf2bae1bad472102789c52ff not found: ID does not exist" Jan 30 08:33:12 crc kubenswrapper[4870]: I0130 08:33:12.294821 4870 scope.go:117] "RemoveContainer" containerID="5f3286bcb78108dac7517ba9bb6ab1b643045f2decd36211166b9e846ca3555b" Jan 30 08:33:12 crc kubenswrapper[4870]: E0130 08:33:12.295380 4870 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"5f3286bcb78108dac7517ba9bb6ab1b643045f2decd36211166b9e846ca3555b\": container with ID starting with 5f3286bcb78108dac7517ba9bb6ab1b643045f2decd36211166b9e846ca3555b not found: ID does not exist" containerID="5f3286bcb78108dac7517ba9bb6ab1b643045f2decd36211166b9e846ca3555b" Jan 30 08:33:12 crc kubenswrapper[4870]: I0130 08:33:12.295424 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f3286bcb78108dac7517ba9bb6ab1b643045f2decd36211166b9e846ca3555b"} err="failed to get container status \"5f3286bcb78108dac7517ba9bb6ab1b643045f2decd36211166b9e846ca3555b\": rpc error: code = NotFound desc = could not find container \"5f3286bcb78108dac7517ba9bb6ab1b643045f2decd36211166b9e846ca3555b\": container with ID starting with 5f3286bcb78108dac7517ba9bb6ab1b643045f2decd36211166b9e846ca3555b not found: ID does not exist" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.404506 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8545fb859-qvd2l"] Jan 30 08:33:13 crc kubenswrapper[4870]: E0130 08:33:13.405346 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeaabceb-b50c-48b6-b72d-d759f1bda8c1" containerName="extract-utilities" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.405363 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeaabceb-b50c-48b6-b72d-d759f1bda8c1" containerName="extract-utilities" Jan 30 08:33:13 crc kubenswrapper[4870]: E0130 08:33:13.405383 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeaabceb-b50c-48b6-b72d-d759f1bda8c1" containerName="extract-content" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.405392 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeaabceb-b50c-48b6-b72d-d759f1bda8c1" containerName="extract-content" Jan 30 08:33:13 crc kubenswrapper[4870]: E0130 08:33:13.405404 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeaabceb-b50c-48b6-b72d-d759f1bda8c1" containerName="registry-server" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.405411 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeaabceb-b50c-48b6-b72d-d759f1bda8c1" containerName="registry-server" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.405702 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeaabceb-b50c-48b6-b72d-d759f1bda8c1" containerName="registry-server" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.407344 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.408861 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.417491 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8545fb859-qvd2l"] Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.562607 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqp7f\" (UniqueName: \"kubernetes.io/projected/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-kube-api-access-xqp7f\") pod \"dnsmasq-dns-8545fb859-qvd2l\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.562646 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-dns-svc\") pod \"dnsmasq-dns-8545fb859-qvd2l\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.562693 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-ovsdbserver-sb\") pod \"dnsmasq-dns-8545fb859-qvd2l\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.562776 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-config\") pod \"dnsmasq-dns-8545fb859-qvd2l\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.562865 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-ovsdbserver-nb\") pod \"dnsmasq-dns-8545fb859-qvd2l\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.562946 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-openstack-edpm-ipam\") pod \"dnsmasq-dns-8545fb859-qvd2l\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.563155 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-dns-swift-storage-0\") pod \"dnsmasq-dns-8545fb859-qvd2l\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.665220 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-config\") pod \"dnsmasq-dns-8545fb859-qvd2l\" (UID: 
\"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.665274 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-ovsdbserver-nb\") pod \"dnsmasq-dns-8545fb859-qvd2l\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.665299 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-openstack-edpm-ipam\") pod \"dnsmasq-dns-8545fb859-qvd2l\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.665336 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-dns-swift-storage-0\") pod \"dnsmasq-dns-8545fb859-qvd2l\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.665422 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqp7f\" (UniqueName: \"kubernetes.io/projected/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-kube-api-access-xqp7f\") pod \"dnsmasq-dns-8545fb859-qvd2l\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.665442 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-dns-svc\") pod \"dnsmasq-dns-8545fb859-qvd2l\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.665483 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-ovsdbserver-sb\") pod \"dnsmasq-dns-8545fb859-qvd2l\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.666298 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-ovsdbserver-sb\") pod \"dnsmasq-dns-8545fb859-qvd2l\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.666297 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-dns-swift-storage-0\") pod \"dnsmasq-dns-8545fb859-qvd2l\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.666619 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-dns-svc\") pod \"dnsmasq-dns-8545fb859-qvd2l\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " 
pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.666704 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-openstack-edpm-ipam\") pod \"dnsmasq-dns-8545fb859-qvd2l\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.666997 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-ovsdbserver-nb\") pod \"dnsmasq-dns-8545fb859-qvd2l\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.667082 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-config\") pod \"dnsmasq-dns-8545fb859-qvd2l\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.684442 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqp7f\" (UniqueName: \"kubernetes.io/projected/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-kube-api-access-xqp7f\") pod \"dnsmasq-dns-8545fb859-qvd2l\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:13 crc kubenswrapper[4870]: I0130 08:33:13.765207 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:14 crc kubenswrapper[4870]: I0130 08:33:14.086640 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeaabceb-b50c-48b6-b72d-d759f1bda8c1" path="/var/lib/kubelet/pods/aeaabceb-b50c-48b6-b72d-d759f1bda8c1/volumes" Jan 30 08:33:14 crc kubenswrapper[4870]: I0130 08:33:14.238356 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8545fb859-qvd2l"] Jan 30 08:33:15 crc kubenswrapper[4870]: I0130 08:33:15.224257 4870 generic.go:334] "Generic (PLEG): container finished" podID="06541376-f5b8-4c6d-bd52-cf6b2d30e2ed" containerID="aa7fda4abf155c21a53cf2f98e31d5c8a03cb75e77e5b68e71924e2b6539bcaa" exitCode=0 Jan 30 08:33:15 crc kubenswrapper[4870]: I0130 08:33:15.224812 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8545fb859-qvd2l" event={"ID":"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed","Type":"ContainerDied","Data":"aa7fda4abf155c21a53cf2f98e31d5c8a03cb75e77e5b68e71924e2b6539bcaa"} Jan 30 08:33:15 crc kubenswrapper[4870]: I0130 08:33:15.227715 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8545fb859-qvd2l" event={"ID":"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed","Type":"ContainerStarted","Data":"7826d9b0a3ff0627f9617f1e48774768eeb63e9d54a8e156f621d77dbe1d82e2"} Jan 30 08:33:16 crc kubenswrapper[4870]: I0130 08:33:16.238017 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8545fb859-qvd2l" event={"ID":"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed","Type":"ContainerStarted","Data":"48d93727034361b344ad973bd73104fc4515555aaf6f73c02cf9dc34de44151c"} Jan 30 08:33:16 crc kubenswrapper[4870]: I0130 08:33:16.238413 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:16 crc 
kubenswrapper[4870]: I0130 08:33:16.272970 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8545fb859-qvd2l" podStartSLOduration=3.2729508689999998 podStartE2EDuration="3.272950869s" podCreationTimestamp="2026-01-30 08:33:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:33:16.264288867 +0000 UTC m=+1434.959835986" watchObservedRunningTime="2026-01-30 08:33:16.272950869 +0000 UTC m=+1434.968497978" Jan 30 08:33:23 crc kubenswrapper[4870]: I0130 08:33:23.766053 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:23 crc kubenswrapper[4870]: I0130 08:33:23.866184 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6999845677-vd26g"] Jan 30 08:33:23 crc kubenswrapper[4870]: I0130 08:33:23.866405 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6999845677-vd26g" podUID="8a0f9be1-926a-4340-9f05-ba673e3e471e" containerName="dnsmasq-dns" containerID="cri-o://478f243e0d74f6dbf93b850491c64bcf3ea2a501bedee4e704be06e6e754b799" gracePeriod=10 Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.005952 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66968b76ff-bk2j6"] Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.008355 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.014453 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66968b76ff-bk2j6"] Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.180647 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f90c906-9b1e-4df6-8b94-367ae01963b7-dns-svc\") pod \"dnsmasq-dns-66968b76ff-bk2j6\" (UID: \"3f90c906-9b1e-4df6-8b94-367ae01963b7\") " pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.180695 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f90c906-9b1e-4df6-8b94-367ae01963b7-ovsdbserver-nb\") pod \"dnsmasq-dns-66968b76ff-bk2j6\" (UID: \"3f90c906-9b1e-4df6-8b94-367ae01963b7\") " pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.180724 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f90c906-9b1e-4df6-8b94-367ae01963b7-ovsdbserver-sb\") pod \"dnsmasq-dns-66968b76ff-bk2j6\" (UID: \"3f90c906-9b1e-4df6-8b94-367ae01963b7\") " pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.180772 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f90c906-9b1e-4df6-8b94-367ae01963b7-dns-swift-storage-0\") pod \"dnsmasq-dns-66968b76ff-bk2j6\" (UID: \"3f90c906-9b1e-4df6-8b94-367ae01963b7\") " pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.180795 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-2q5jb\" (UniqueName: \"kubernetes.io/projected/3f90c906-9b1e-4df6-8b94-367ae01963b7-kube-api-access-2q5jb\") pod \"dnsmasq-dns-66968b76ff-bk2j6\" (UID: \"3f90c906-9b1e-4df6-8b94-367ae01963b7\") " pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.180856 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3f90c906-9b1e-4df6-8b94-367ae01963b7-openstack-edpm-ipam\") pod \"dnsmasq-dns-66968b76ff-bk2j6\" (UID: \"3f90c906-9b1e-4df6-8b94-367ae01963b7\") " pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.180975 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f90c906-9b1e-4df6-8b94-367ae01963b7-config\") pod \"dnsmasq-dns-66968b76ff-bk2j6\" (UID: \"3f90c906-9b1e-4df6-8b94-367ae01963b7\") " pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.282707 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f90c906-9b1e-4df6-8b94-367ae01963b7-config\") pod \"dnsmasq-dns-66968b76ff-bk2j6\" (UID: \"3f90c906-9b1e-4df6-8b94-367ae01963b7\") " pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.282914 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f90c906-9b1e-4df6-8b94-367ae01963b7-dns-svc\") pod \"dnsmasq-dns-66968b76ff-bk2j6\" (UID: \"3f90c906-9b1e-4df6-8b94-367ae01963b7\") " pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.282943 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f90c906-9b1e-4df6-8b94-367ae01963b7-ovsdbserver-nb\") pod \"dnsmasq-dns-66968b76ff-bk2j6\" (UID: \"3f90c906-9b1e-4df6-8b94-367ae01963b7\") " pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.282969 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f90c906-9b1e-4df6-8b94-367ae01963b7-ovsdbserver-sb\") pod \"dnsmasq-dns-66968b76ff-bk2j6\" (UID: \"3f90c906-9b1e-4df6-8b94-367ae01963b7\") " pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.283017 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q5jb\" (UniqueName: \"kubernetes.io/projected/3f90c906-9b1e-4df6-8b94-367ae01963b7-kube-api-access-2q5jb\") pod \"dnsmasq-dns-66968b76ff-bk2j6\" (UID: \"3f90c906-9b1e-4df6-8b94-367ae01963b7\") " pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.283038 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f90c906-9b1e-4df6-8b94-367ae01963b7-dns-swift-storage-0\") pod \"dnsmasq-dns-66968b76ff-bk2j6\" (UID: \"3f90c906-9b1e-4df6-8b94-367ae01963b7\") " pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.283087 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3f90c906-9b1e-4df6-8b94-367ae01963b7-openstack-edpm-ipam\") pod \"dnsmasq-dns-66968b76ff-bk2j6\" (UID: \"3f90c906-9b1e-4df6-8b94-367ae01963b7\") " pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.284121 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3f90c906-9b1e-4df6-8b94-367ae01963b7-openstack-edpm-ipam\") pod \"dnsmasq-dns-66968b76ff-bk2j6\" (UID: \"3f90c906-9b1e-4df6-8b94-367ae01963b7\") " pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.284174 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f90c906-9b1e-4df6-8b94-367ae01963b7-dns-swift-storage-0\") pod \"dnsmasq-dns-66968b76ff-bk2j6\" (UID: \"3f90c906-9b1e-4df6-8b94-367ae01963b7\") " pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.284220 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f90c906-9b1e-4df6-8b94-367ae01963b7-ovsdbserver-nb\") pod \"dnsmasq-dns-66968b76ff-bk2j6\" (UID: \"3f90c906-9b1e-4df6-8b94-367ae01963b7\") " pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.284317 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f90c906-9b1e-4df6-8b94-367ae01963b7-ovsdbserver-sb\") pod \"dnsmasq-dns-66968b76ff-bk2j6\" (UID: \"3f90c906-9b1e-4df6-8b94-367ae01963b7\") " pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.284723 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f90c906-9b1e-4df6-8b94-367ae01963b7-config\") pod \"dnsmasq-dns-66968b76ff-bk2j6\" (UID: \"3f90c906-9b1e-4df6-8b94-367ae01963b7\") " pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.287125 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f90c906-9b1e-4df6-8b94-367ae01963b7-dns-svc\") pod \"dnsmasq-dns-66968b76ff-bk2j6\" (UID: \"3f90c906-9b1e-4df6-8b94-367ae01963b7\") " pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.323464 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q5jb\" (UniqueName: \"kubernetes.io/projected/3f90c906-9b1e-4df6-8b94-367ae01963b7-kube-api-access-2q5jb\") pod \"dnsmasq-dns-66968b76ff-bk2j6\" (UID: \"3f90c906-9b1e-4df6-8b94-367ae01963b7\") " pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.334178 4870 generic.go:334] "Generic (PLEG): container finished" podID="8a0f9be1-926a-4340-9f05-ba673e3e471e" containerID="478f243e0d74f6dbf93b850491c64bcf3ea2a501bedee4e704be06e6e754b799" exitCode=0 Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.334231 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6999845677-vd26g" event={"ID":"8a0f9be1-926a-4340-9f05-ba673e3e471e","Type":"ContainerDied","Data":"478f243e0d74f6dbf93b850491c64bcf3ea2a501bedee4e704be06e6e754b799"} Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 
08:33:24.368890 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.473112 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.591753 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-dns-swift-storage-0\") pod \"8a0f9be1-926a-4340-9f05-ba673e3e471e\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.599063 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-dns-svc\") pod \"8a0f9be1-926a-4340-9f05-ba673e3e471e\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.599258 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2gv2\" (UniqueName: \"kubernetes.io/projected/8a0f9be1-926a-4340-9f05-ba673e3e471e-kube-api-access-c2gv2\") pod \"8a0f9be1-926a-4340-9f05-ba673e3e471e\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.599320 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-ovsdbserver-sb\") pod \"8a0f9be1-926a-4340-9f05-ba673e3e471e\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.599354 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-config\") pod \"8a0f9be1-926a-4340-9f05-ba673e3e471e\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.599802 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-ovsdbserver-nb\") pod \"8a0f9be1-926a-4340-9f05-ba673e3e471e\" (UID: \"8a0f9be1-926a-4340-9f05-ba673e3e471e\") " Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.617226 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a0f9be1-926a-4340-9f05-ba673e3e471e-kube-api-access-c2gv2" (OuterVolumeSpecName: "kube-api-access-c2gv2") pod "8a0f9be1-926a-4340-9f05-ba673e3e471e" (UID: "8a0f9be1-926a-4340-9f05-ba673e3e471e"). InnerVolumeSpecName "kube-api-access-c2gv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.669963 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-config" (OuterVolumeSpecName: "config") pod "8a0f9be1-926a-4340-9f05-ba673e3e471e" (UID: "8a0f9be1-926a-4340-9f05-ba673e3e471e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.681229 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8a0f9be1-926a-4340-9f05-ba673e3e471e" (UID: "8a0f9be1-926a-4340-9f05-ba673e3e471e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.682306 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8a0f9be1-926a-4340-9f05-ba673e3e471e" (UID: "8a0f9be1-926a-4340-9f05-ba673e3e471e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.685387 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8a0f9be1-926a-4340-9f05-ba673e3e471e" (UID: "8a0f9be1-926a-4340-9f05-ba673e3e471e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.703192 4870 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.703933 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.703946 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2gv2\" (UniqueName: \"kubernetes.io/projected/8a0f9be1-926a-4340-9f05-ba673e3e471e-kube-api-access-c2gv2\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.703962 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.703972 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.704093 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8a0f9be1-926a-4340-9f05-ba673e3e471e" (UID: "8a0f9be1-926a-4340-9f05-ba673e3e471e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.806777 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a0f9be1-926a-4340-9f05-ba673e3e471e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:24 crc kubenswrapper[4870]: I0130 08:33:24.995435 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66968b76ff-bk2j6"] Jan 30 08:33:25 crc kubenswrapper[4870]: I0130 08:33:25.345841 4870 generic.go:334] "Generic (PLEG): container finished" podID="3f90c906-9b1e-4df6-8b94-367ae01963b7" containerID="0addfc6a8224617c6f63521fe23229a28227d9e93d1629ec2c7dad7e45a59cf9" exitCode=0 Jan 30 08:33:25 crc kubenswrapper[4870]: I0130 08:33:25.345909 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" event={"ID":"3f90c906-9b1e-4df6-8b94-367ae01963b7","Type":"ContainerDied","Data":"0addfc6a8224617c6f63521fe23229a28227d9e93d1629ec2c7dad7e45a59cf9"} Jan 30 08:33:25 crc kubenswrapper[4870]: I0130 08:33:25.346393 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" event={"ID":"3f90c906-9b1e-4df6-8b94-367ae01963b7","Type":"ContainerStarted","Data":"4cf2d7c4e57e6581d6da7b31d6f1b60b35039aa0e3312141c04d27d29fd51192"} Jan 30 08:33:25 crc kubenswrapper[4870]: I0130 08:33:25.349954 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6999845677-vd26g" event={"ID":"8a0f9be1-926a-4340-9f05-ba673e3e471e","Type":"ContainerDied","Data":"b68353bb62e0c010f98866001c94c9fbd25b787dc4392f6cd92563052dd236dc"} Jan 30 08:33:25 crc kubenswrapper[4870]: I0130 08:33:25.350002 4870 scope.go:117] "RemoveContainer" containerID="478f243e0d74f6dbf93b850491c64bcf3ea2a501bedee4e704be06e6e754b799" Jan 30 08:33:25 crc kubenswrapper[4870]: I0130 08:33:25.350042 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6999845677-vd26g" Jan 30 08:33:25 crc kubenswrapper[4870]: I0130 08:33:25.435863 4870 scope.go:117] "RemoveContainer" containerID="cf4692acee92608a7992da7d8327f9e59bf6302ddd00cf4b1c51b56d002d56e2" Jan 30 08:33:25 crc kubenswrapper[4870]: I0130 08:33:25.450355 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6999845677-vd26g"] Jan 30 08:33:25 crc kubenswrapper[4870]: I0130 08:33:25.460924 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6999845677-vd26g"] Jan 30 08:33:26 crc kubenswrapper[4870]: I0130 08:33:26.088286 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a0f9be1-926a-4340-9f05-ba673e3e471e" path="/var/lib/kubelet/pods/8a0f9be1-926a-4340-9f05-ba673e3e471e/volumes" Jan 30 08:33:26 crc kubenswrapper[4870]: I0130 08:33:26.358865 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" event={"ID":"3f90c906-9b1e-4df6-8b94-367ae01963b7","Type":"ContainerStarted","Data":"4b04f926d24d905b140ac17350d4e04e61cb1e7defd63fecb73a38e721dc978f"} Jan 30 08:33:26 crc kubenswrapper[4870]: I0130 08:33:26.359023 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:26 crc kubenswrapper[4870]: I0130 08:33:26.377272 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" podStartSLOduration=3.377254902 podStartE2EDuration="3.377254902s" podCreationTimestamp="2026-01-30 08:33:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:33:26.373747393 +0000 UTC m=+1445.069294512" watchObservedRunningTime="2026-01-30 08:33:26.377254902 +0000 UTC m=+1445.072802011" Jan 30 08:33:34 crc kubenswrapper[4870]: I0130 08:33:34.371128 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66968b76ff-bk2j6" Jan 30 08:33:34 crc kubenswrapper[4870]: I0130 08:33:34.437023 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8545fb859-qvd2l"] Jan 30 08:33:34 crc kubenswrapper[4870]: I0130 08:33:34.437901 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8545fb859-qvd2l" podUID="06541376-f5b8-4c6d-bd52-cf6b2d30e2ed" containerName="dnsmasq-dns" containerID="cri-o://48d93727034361b344ad973bd73104fc4515555aaf6f73c02cf9dc34de44151c" gracePeriod=10 Jan 30 08:33:34 crc kubenswrapper[4870]: I0130 08:33:34.910131 4870 scope.go:117] "RemoveContainer" containerID="7a7adac6f43dd00107198ca12f07a56a507d2b37982cb644a01747e8eb0b5b52" Jan 30 08:33:34 crc kubenswrapper[4870]: I0130 08:33:34.957130 4870 scope.go:117] "RemoveContainer" containerID="cd0dfeab70fb307cbb6535bdd2b5daa2556dc7c49a1bf88e90112f1cde7b135d" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.049718 4870 scope.go:117] "RemoveContainer" containerID="d52e878ce9dd90e8dba444ebd6a2071ac79735b92b1f1220889d88eefcb18bc4" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.092695 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.208639 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-ovsdbserver-sb\") pod \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.208719 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-openstack-edpm-ipam\") pod \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.208825 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-dns-svc\") pod \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.208900 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-dns-swift-storage-0\") pod \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.208946 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-config\") pod \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.208963 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqp7f\" (UniqueName: \"kubernetes.io/projected/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-kube-api-access-xqp7f\") pod \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.208984 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-ovsdbserver-nb\") pod \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\" (UID: \"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed\") " Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.213824 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-kube-api-access-xqp7f" (OuterVolumeSpecName: "kube-api-access-xqp7f") pod "06541376-f5b8-4c6d-bd52-cf6b2d30e2ed" (UID: "06541376-f5b8-4c6d-bd52-cf6b2d30e2ed"). InnerVolumeSpecName "kube-api-access-xqp7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.261492 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "06541376-f5b8-4c6d-bd52-cf6b2d30e2ed" (UID: "06541376-f5b8-4c6d-bd52-cf6b2d30e2ed"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.263744 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "06541376-f5b8-4c6d-bd52-cf6b2d30e2ed" (UID: "06541376-f5b8-4c6d-bd52-cf6b2d30e2ed"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.270849 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "06541376-f5b8-4c6d-bd52-cf6b2d30e2ed" (UID: "06541376-f5b8-4c6d-bd52-cf6b2d30e2ed"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.271271 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "06541376-f5b8-4c6d-bd52-cf6b2d30e2ed" (UID: "06541376-f5b8-4c6d-bd52-cf6b2d30e2ed"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.276491 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "06541376-f5b8-4c6d-bd52-cf6b2d30e2ed" (UID: "06541376-f5b8-4c6d-bd52-cf6b2d30e2ed"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.284048 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-config" (OuterVolumeSpecName: "config") pod "06541376-f5b8-4c6d-bd52-cf6b2d30e2ed" (UID: "06541376-f5b8-4c6d-bd52-cf6b2d30e2ed"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.311352 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.311384 4870 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.311395 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.311405 4870 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.311413 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.311422 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqp7f\" (UniqueName: \"kubernetes.io/projected/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-kube-api-access-xqp7f\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.311431 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.460104 4870 generic.go:334] "Generic (PLEG): container finished" podID="06541376-f5b8-4c6d-bd52-cf6b2d30e2ed" containerID="48d93727034361b344ad973bd73104fc4515555aaf6f73c02cf9dc34de44151c" exitCode=0 Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.460138 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8545fb859-qvd2l" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.460188 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8545fb859-qvd2l" event={"ID":"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed","Type":"ContainerDied","Data":"48d93727034361b344ad973bd73104fc4515555aaf6f73c02cf9dc34de44151c"} Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.460224 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8545fb859-qvd2l" event={"ID":"06541376-f5b8-4c6d-bd52-cf6b2d30e2ed","Type":"ContainerDied","Data":"7826d9b0a3ff0627f9617f1e48774768eeb63e9d54a8e156f621d77dbe1d82e2"} Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.460246 4870 scope.go:117] "RemoveContainer" containerID="48d93727034361b344ad973bd73104fc4515555aaf6f73c02cf9dc34de44151c" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.482563 4870 scope.go:117] "RemoveContainer" containerID="aa7fda4abf155c21a53cf2f98e31d5c8a03cb75e77e5b68e71924e2b6539bcaa" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.490429 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8545fb859-qvd2l"] Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.500943 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8545fb859-qvd2l"] Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.509292 4870 scope.go:117] "RemoveContainer" containerID="48d93727034361b344ad973bd73104fc4515555aaf6f73c02cf9dc34de44151c" Jan 30 08:33:35 crc kubenswrapper[4870]: E0130 08:33:35.510412 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48d93727034361b344ad973bd73104fc4515555aaf6f73c02cf9dc34de44151c\": container with ID starting with 48d93727034361b344ad973bd73104fc4515555aaf6f73c02cf9dc34de44151c not found: ID does not exist" containerID="48d93727034361b344ad973bd73104fc4515555aaf6f73c02cf9dc34de44151c" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.510492 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48d93727034361b344ad973bd73104fc4515555aaf6f73c02cf9dc34de44151c"} err="failed to get container status \"48d93727034361b344ad973bd73104fc4515555aaf6f73c02cf9dc34de44151c\": rpc error: code = NotFound desc = could not find container \"48d93727034361b344ad973bd73104fc4515555aaf6f73c02cf9dc34de44151c\": container with ID starting with 48d93727034361b344ad973bd73104fc4515555aaf6f73c02cf9dc34de44151c not found: ID does not exist" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.510583 4870 scope.go:117] "RemoveContainer" containerID="aa7fda4abf155c21a53cf2f98e31d5c8a03cb75e77e5b68e71924e2b6539bcaa" Jan 30 08:33:35 crc kubenswrapper[4870]: E0130 08:33:35.511035 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa7fda4abf155c21a53cf2f98e31d5c8a03cb75e77e5b68e71924e2b6539bcaa\": container with ID starting with aa7fda4abf155c21a53cf2f98e31d5c8a03cb75e77e5b68e71924e2b6539bcaa not found: ID does not exist" containerID="aa7fda4abf155c21a53cf2f98e31d5c8a03cb75e77e5b68e71924e2b6539bcaa" Jan 30 08:33:35 crc kubenswrapper[4870]: I0130 08:33:35.511104 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa7fda4abf155c21a53cf2f98e31d5c8a03cb75e77e5b68e71924e2b6539bcaa"} err="failed to get container status 
\"aa7fda4abf155c21a53cf2f98e31d5c8a03cb75e77e5b68e71924e2b6539bcaa\": rpc error: code = NotFound desc = could not find container \"aa7fda4abf155c21a53cf2f98e31d5c8a03cb75e77e5b68e71924e2b6539bcaa\": container with ID starting with aa7fda4abf155c21a53cf2f98e31d5c8a03cb75e77e5b68e71924e2b6539bcaa not found: ID does not exist" Jan 30 08:33:36 crc kubenswrapper[4870]: I0130 08:33:36.086024 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06541376-f5b8-4c6d-bd52-cf6b2d30e2ed" path="/var/lib/kubelet/pods/06541376-f5b8-4c6d-bd52-cf6b2d30e2ed/volumes" Jan 30 08:33:37 crc kubenswrapper[4870]: I0130 08:33:37.484300 4870 generic.go:334] "Generic (PLEG): container finished" podID="2575ea2c-dc22-4ca2-bf0b-d67eaa330832" containerID="b54d083e49539914abb80a09d56280e72de1f73b7b8543555a5595e346f4fb9e" exitCode=0 Jan 30 08:33:37 crc kubenswrapper[4870]: I0130 08:33:37.484413 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2575ea2c-dc22-4ca2-bf0b-d67eaa330832","Type":"ContainerDied","Data":"b54d083e49539914abb80a09d56280e72de1f73b7b8543555a5595e346f4fb9e"} Jan 30 08:33:37 crc kubenswrapper[4870]: I0130 08:33:37.487720 4870 generic.go:334] "Generic (PLEG): container finished" podID="bf05f72e-aa42-4296-a7dc-8b742d6e0aab" containerID="3624472f52ac7be1319516a8ee600eb767c9e0a446d907875f2e5857dd2b649f" exitCode=0 Jan 30 08:33:37 crc kubenswrapper[4870]: I0130 08:33:37.487763 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bf05f72e-aa42-4296-a7dc-8b742d6e0aab","Type":"ContainerDied","Data":"3624472f52ac7be1319516a8ee600eb767c9e0a446d907875f2e5857dd2b649f"} Jan 30 08:33:38 crc kubenswrapper[4870]: I0130 08:33:38.498786 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bf05f72e-aa42-4296-a7dc-8b742d6e0aab","Type":"ContainerStarted","Data":"db147700e9462fac8000f8f140a1d336d90dd98b395146a598c0eb481a3983a5"} Jan 30 08:33:38 crc kubenswrapper[4870]: I0130 08:33:38.499404 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 08:33:38 crc kubenswrapper[4870]: I0130 08:33:38.501938 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2575ea2c-dc22-4ca2-bf0b-d67eaa330832","Type":"ContainerStarted","Data":"f76a1a1abe0c8909c0ecbc74f8237bab4820a439c1adbd52cfdc4e24d255c330"} Jan 30 08:33:38 crc kubenswrapper[4870]: I0130 08:33:38.502191 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:33:38 crc kubenswrapper[4870]: I0130 08:33:38.531002 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.530983133 podStartE2EDuration="36.530983133s" podCreationTimestamp="2026-01-30 08:33:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:33:38.52516893 +0000 UTC m=+1457.220716069" watchObservedRunningTime="2026-01-30 08:33:38.530983133 +0000 UTC m=+1457.226530262" Jan 30 08:33:38 crc kubenswrapper[4870]: I0130 08:33:38.555702 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.555683787 podStartE2EDuration="36.555683787s" podCreationTimestamp="2026-01-30 08:33:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:33:38.551487356 +0000 UTC m=+1457.247034465" watchObservedRunningTime="2026-01-30 08:33:38.555683787 +0000 UTC m=+1457.251230906" Jan 30 08:33:46 crc kubenswrapper[4870]: I0130 08:33:46.842293 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm"] Jan 30 08:33:46 crc kubenswrapper[4870]: E0130 08:33:46.843660 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a0f9be1-926a-4340-9f05-ba673e3e471e" containerName="init" Jan 30 08:33:46 crc kubenswrapper[4870]: I0130 08:33:46.843679 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a0f9be1-926a-4340-9f05-ba673e3e471e" containerName="init" Jan 30 08:33:46 crc kubenswrapper[4870]: E0130 08:33:46.843703 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a0f9be1-926a-4340-9f05-ba673e3e471e" containerName="dnsmasq-dns" Jan 30 08:33:46 crc kubenswrapper[4870]: I0130 08:33:46.843712 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a0f9be1-926a-4340-9f05-ba673e3e471e" containerName="dnsmasq-dns" Jan 30 08:33:46 crc kubenswrapper[4870]: E0130 08:33:46.843729 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06541376-f5b8-4c6d-bd52-cf6b2d30e2ed" containerName="dnsmasq-dns" Jan 30 08:33:46 crc kubenswrapper[4870]: I0130 08:33:46.843737 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="06541376-f5b8-4c6d-bd52-cf6b2d30e2ed" containerName="dnsmasq-dns" Jan 30 08:33:46 crc kubenswrapper[4870]: E0130 08:33:46.843769 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06541376-f5b8-4c6d-bd52-cf6b2d30e2ed" containerName="init" Jan 30 08:33:46 crc kubenswrapper[4870]: I0130 08:33:46.843780 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="06541376-f5b8-4c6d-bd52-cf6b2d30e2ed" containerName="init" Jan 30 08:33:46 crc kubenswrapper[4870]: I0130 08:33:46.844059 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a0f9be1-926a-4340-9f05-ba673e3e471e" containerName="dnsmasq-dns" Jan 30 08:33:46 crc kubenswrapper[4870]: I0130 08:33:46.844095 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="06541376-f5b8-4c6d-bd52-cf6b2d30e2ed" containerName="dnsmasq-dns" Jan 30 08:33:46 crc kubenswrapper[4870]: I0130 08:33:46.844998 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm" Jan 30 08:33:46 crc kubenswrapper[4870]: I0130 08:33:46.850210 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 08:33:46 crc kubenswrapper[4870]: I0130 08:33:46.854619 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 08:33:46 crc kubenswrapper[4870]: I0130 08:33:46.855044 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n6f5w" Jan 30 08:33:46 crc kubenswrapper[4870]: I0130 08:33:46.855600 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 08:33:46 crc kubenswrapper[4870]: I0130 08:33:46.875136 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm"] Jan 30 08:33:46 crc kubenswrapper[4870]: I0130 08:33:46.951113 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68089c9f-f566-4e65-b2ea-dd65a4d9012c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm\" (UID: \"68089c9f-f566-4e65-b2ea-dd65a4d9012c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm" Jan 30 08:33:46 crc kubenswrapper[4870]: I0130 08:33:46.951516 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68089c9f-f566-4e65-b2ea-dd65a4d9012c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm\" (UID: \"68089c9f-f566-4e65-b2ea-dd65a4d9012c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm" Jan 30 08:33:46 crc kubenswrapper[4870]: I0130 08:33:46.951768 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68089c9f-f566-4e65-b2ea-dd65a4d9012c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm\" (UID: \"68089c9f-f566-4e65-b2ea-dd65a4d9012c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm" Jan 30 08:33:46 crc kubenswrapper[4870]: I0130 08:33:46.951833 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq5sg\" (UniqueName: \"kubernetes.io/projected/68089c9f-f566-4e65-b2ea-dd65a4d9012c-kube-api-access-sq5sg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm\" (UID: \"68089c9f-f566-4e65-b2ea-dd65a4d9012c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm" Jan 30 08:33:47 crc kubenswrapper[4870]: I0130 08:33:47.053530 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68089c9f-f566-4e65-b2ea-dd65a4d9012c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm\" (UID: \"68089c9f-f566-4e65-b2ea-dd65a4d9012c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm" Jan 30 08:33:47 crc kubenswrapper[4870]: I0130 08:33:47.053633 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/68089c9f-f566-4e65-b2ea-dd65a4d9012c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm\" (UID: \"68089c9f-f566-4e65-b2ea-dd65a4d9012c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm" Jan 30 08:33:47 crc kubenswrapper[4870]: I0130 08:33:47.053664 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68089c9f-f566-4e65-b2ea-dd65a4d9012c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm\" (UID: \"68089c9f-f566-4e65-b2ea-dd65a4d9012c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm" Jan 30 08:33:47 crc kubenswrapper[4870]: I0130 08:33:47.053691 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq5sg\" (UniqueName: \"kubernetes.io/projected/68089c9f-f566-4e65-b2ea-dd65a4d9012c-kube-api-access-sq5sg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm\" (UID: \"68089c9f-f566-4e65-b2ea-dd65a4d9012c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm" Jan 30 08:33:47 crc kubenswrapper[4870]: I0130 08:33:47.059205 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68089c9f-f566-4e65-b2ea-dd65a4d9012c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm\" (UID: \"68089c9f-f566-4e65-b2ea-dd65a4d9012c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm" Jan 30 08:33:47 crc kubenswrapper[4870]: I0130 08:33:47.061327 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68089c9f-f566-4e65-b2ea-dd65a4d9012c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm\" (UID: \"68089c9f-f566-4e65-b2ea-dd65a4d9012c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm" Jan 30 08:33:47 crc kubenswrapper[4870]: I0130 08:33:47.069772 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68089c9f-f566-4e65-b2ea-dd65a4d9012c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm\" (UID: \"68089c9f-f566-4e65-b2ea-dd65a4d9012c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm" Jan 30 08:33:47 crc kubenswrapper[4870]: I0130 08:33:47.070507 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq5sg\" (UniqueName: \"kubernetes.io/projected/68089c9f-f566-4e65-b2ea-dd65a4d9012c-kube-api-access-sq5sg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm\" (UID: \"68089c9f-f566-4e65-b2ea-dd65a4d9012c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm" Jan 30 08:33:47 crc kubenswrapper[4870]: I0130 08:33:47.188701 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm" Jan 30 08:33:47 crc kubenswrapper[4870]: W0130 08:33:47.907475 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68089c9f_f566_4e65_b2ea_dd65a4d9012c.slice/crio-8a1331f6e7eb347ff4b3857fd9acf81ba2e56b9d245819fa7e1216f5d37fa0b6 WatchSource:0}: Error finding container 8a1331f6e7eb347ff4b3857fd9acf81ba2e56b9d245819fa7e1216f5d37fa0b6: Status 404 returned error can't find the container with id 8a1331f6e7eb347ff4b3857fd9acf81ba2e56b9d245819fa7e1216f5d37fa0b6 Jan 30 08:33:47 crc kubenswrapper[4870]: I0130 08:33:47.911963 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm"] Jan 30 08:33:48 crc kubenswrapper[4870]: I0130 08:33:48.596992 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm" event={"ID":"68089c9f-f566-4e65-b2ea-dd65a4d9012c","Type":"ContainerStarted","Data":"8a1331f6e7eb347ff4b3857fd9acf81ba2e56b9d245819fa7e1216f5d37fa0b6"} Jan 30 08:33:52 crc kubenswrapper[4870]: I0130 08:33:52.663017 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="bf05f72e-aa42-4296-a7dc-8b742d6e0aab" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.231:5671: connect: connection refused" Jan 30 08:33:52 crc kubenswrapper[4870]: I0130 08:33:52.782156 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:34:00 crc kubenswrapper[4870]: I0130 08:34:00.715446 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm" event={"ID":"68089c9f-f566-4e65-b2ea-dd65a4d9012c","Type":"ContainerStarted","Data":"229d0b4cb2361e681eec79909ce30ca976b76a97bc610f98c8041877c395c51a"} Jan 30 08:34:00 crc kubenswrapper[4870]: I0130 08:34:00.745401 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm" podStartSLOduration=2.765465949 podStartE2EDuration="14.745372799s" podCreationTimestamp="2026-01-30 08:33:46 +0000 UTC" firstStartedPulling="2026-01-30 08:33:47.915165955 +0000 UTC m=+1466.610713054" lastFinishedPulling="2026-01-30 08:33:59.895072795 +0000 UTC m=+1478.590619904" observedRunningTime="2026-01-30 08:34:00.731585116 +0000 UTC m=+1479.427132245" watchObservedRunningTime="2026-01-30 08:34:00.745372799 +0000 UTC m=+1479.440919928" Jan 30 08:34:02 crc kubenswrapper[4870]: I0130 08:34:02.661899 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 08:34:12 crc kubenswrapper[4870]: I0130 08:34:12.862905 4870 generic.go:334] "Generic (PLEG): container finished" podID="68089c9f-f566-4e65-b2ea-dd65a4d9012c" containerID="229d0b4cb2361e681eec79909ce30ca976b76a97bc610f98c8041877c395c51a" exitCode=0 Jan 30 08:34:12 crc kubenswrapper[4870]: I0130 08:34:12.863066 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm" event={"ID":"68089c9f-f566-4e65-b2ea-dd65a4d9012c","Type":"ContainerDied","Data":"229d0b4cb2361e681eec79909ce30ca976b76a97bc610f98c8041877c395c51a"} Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.290962 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm" Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.438614 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq5sg\" (UniqueName: \"kubernetes.io/projected/68089c9f-f566-4e65-b2ea-dd65a4d9012c-kube-api-access-sq5sg\") pod \"68089c9f-f566-4e65-b2ea-dd65a4d9012c\" (UID: \"68089c9f-f566-4e65-b2ea-dd65a4d9012c\") " Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.438798 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68089c9f-f566-4e65-b2ea-dd65a4d9012c-repo-setup-combined-ca-bundle\") pod \"68089c9f-f566-4e65-b2ea-dd65a4d9012c\" (UID: \"68089c9f-f566-4e65-b2ea-dd65a4d9012c\") " Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.438841 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68089c9f-f566-4e65-b2ea-dd65a4d9012c-ssh-key-openstack-edpm-ipam\") pod \"68089c9f-f566-4e65-b2ea-dd65a4d9012c\" (UID: \"68089c9f-f566-4e65-b2ea-dd65a4d9012c\") " Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.438928 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68089c9f-f566-4e65-b2ea-dd65a4d9012c-inventory\") pod \"68089c9f-f566-4e65-b2ea-dd65a4d9012c\" (UID: \"68089c9f-f566-4e65-b2ea-dd65a4d9012c\") " Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.445838 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68089c9f-f566-4e65-b2ea-dd65a4d9012c-kube-api-access-sq5sg" (OuterVolumeSpecName: "kube-api-access-sq5sg") pod "68089c9f-f566-4e65-b2ea-dd65a4d9012c" (UID: "68089c9f-f566-4e65-b2ea-dd65a4d9012c"). InnerVolumeSpecName "kube-api-access-sq5sg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.446585 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68089c9f-f566-4e65-b2ea-dd65a4d9012c-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "68089c9f-f566-4e65-b2ea-dd65a4d9012c" (UID: "68089c9f-f566-4e65-b2ea-dd65a4d9012c"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.468775 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68089c9f-f566-4e65-b2ea-dd65a4d9012c-inventory" (OuterVolumeSpecName: "inventory") pod "68089c9f-f566-4e65-b2ea-dd65a4d9012c" (UID: "68089c9f-f566-4e65-b2ea-dd65a4d9012c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.491038 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68089c9f-f566-4e65-b2ea-dd65a4d9012c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "68089c9f-f566-4e65-b2ea-dd65a4d9012c" (UID: "68089c9f-f566-4e65-b2ea-dd65a4d9012c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.543705 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq5sg\" (UniqueName: \"kubernetes.io/projected/68089c9f-f566-4e65-b2ea-dd65a4d9012c-kube-api-access-sq5sg\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.543766 4870 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68089c9f-f566-4e65-b2ea-dd65a4d9012c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.543785 4870 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68089c9f-f566-4e65-b2ea-dd65a4d9012c-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.543805 4870 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68089c9f-f566-4e65-b2ea-dd65a4d9012c-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.893625 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm" event={"ID":"68089c9f-f566-4e65-b2ea-dd65a4d9012c","Type":"ContainerDied","Data":"8a1331f6e7eb347ff4b3857fd9acf81ba2e56b9d245819fa7e1216f5d37fa0b6"} Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.894033 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a1331f6e7eb347ff4b3857fd9acf81ba2e56b9d245819fa7e1216f5d37fa0b6" Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.893677 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm" Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.988306 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-fpd48"] Jan 30 08:34:14 crc kubenswrapper[4870]: E0130 08:34:14.988738 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68089c9f-f566-4e65-b2ea-dd65a4d9012c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.988758 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="68089c9f-f566-4e65-b2ea-dd65a4d9012c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.989061 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="68089c9f-f566-4e65-b2ea-dd65a4d9012c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.989841 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fpd48" Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.992124 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.992182 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n6f5w" Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.992701 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.993151 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 08:34:14 crc kubenswrapper[4870]: I0130 08:34:14.999747 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-fpd48"] Jan 30 08:34:15 crc kubenswrapper[4870]: I0130 08:34:15.156209 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fkhs\" (UniqueName: \"kubernetes.io/projected/c22cad0f-b909-42fa-95c5-2536e1105161-kube-api-access-4fkhs\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fpd48\" (UID: \"c22cad0f-b909-42fa-95c5-2536e1105161\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fpd48" Jan 30 08:34:15 crc kubenswrapper[4870]: I0130 08:34:15.156507 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c22cad0f-b909-42fa-95c5-2536e1105161-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fpd48\" (UID: \"c22cad0f-b909-42fa-95c5-2536e1105161\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fpd48" Jan 30 08:34:15 crc kubenswrapper[4870]: I0130 08:34:15.156561 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c22cad0f-b909-42fa-95c5-2536e1105161-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fpd48\" (UID: \"c22cad0f-b909-42fa-95c5-2536e1105161\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fpd48" Jan 30 08:34:15 crc kubenswrapper[4870]: I0130 08:34:15.259506 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fkhs\" (UniqueName: \"kubernetes.io/projected/c22cad0f-b909-42fa-95c5-2536e1105161-kube-api-access-4fkhs\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fpd48\" (UID: \"c22cad0f-b909-42fa-95c5-2536e1105161\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fpd48" Jan 30 08:34:15 crc kubenswrapper[4870]: I0130 08:34:15.259633 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c22cad0f-b909-42fa-95c5-2536e1105161-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fpd48\" (UID: \"c22cad0f-b909-42fa-95c5-2536e1105161\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fpd48" Jan 30 08:34:15 crc kubenswrapper[4870]: I0130 08:34:15.259690 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c22cad0f-b909-42fa-95c5-2536e1105161-inventory\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-fpd48\" (UID: \"c22cad0f-b909-42fa-95c5-2536e1105161\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fpd48" Jan 30 08:34:15 crc kubenswrapper[4870]: I0130 08:34:15.266634 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c22cad0f-b909-42fa-95c5-2536e1105161-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fpd48\" (UID: \"c22cad0f-b909-42fa-95c5-2536e1105161\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fpd48" Jan 30 08:34:15 crc kubenswrapper[4870]: I0130 08:34:15.266946 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c22cad0f-b909-42fa-95c5-2536e1105161-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fpd48\" (UID: \"c22cad0f-b909-42fa-95c5-2536e1105161\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fpd48" Jan 30 08:34:15 crc kubenswrapper[4870]: I0130 08:34:15.288398 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fkhs\" (UniqueName: \"kubernetes.io/projected/c22cad0f-b909-42fa-95c5-2536e1105161-kube-api-access-4fkhs\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fpd48\" (UID: \"c22cad0f-b909-42fa-95c5-2536e1105161\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fpd48" Jan 30 08:34:15 crc kubenswrapper[4870]: I0130 08:34:15.350145 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fpd48" Jan 30 08:34:15 crc kubenswrapper[4870]: I0130 08:34:15.886143 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-fpd48"] Jan 30 08:34:15 crc kubenswrapper[4870]: I0130 08:34:15.904658 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fpd48" event={"ID":"c22cad0f-b909-42fa-95c5-2536e1105161","Type":"ContainerStarted","Data":"487f115d8a470a1ae56a3e31ff4d51daf55d472d8271d7644a3fc821dc96f45b"} Jan 30 08:34:16 crc kubenswrapper[4870]: I0130 08:34:16.918463 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fpd48" event={"ID":"c22cad0f-b909-42fa-95c5-2536e1105161","Type":"ContainerStarted","Data":"d9ef039ab8c2e568b2b52b85afceb2fe4365ed15ae540b3a7b48e6bc4512bd56"} Jan 30 08:34:16 crc kubenswrapper[4870]: I0130 08:34:16.936649 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fpd48" podStartSLOduration=2.5014558730000003 podStartE2EDuration="2.9366318s" podCreationTimestamp="2026-01-30 08:34:14 +0000 UTC" firstStartedPulling="2026-01-30 08:34:15.888312026 +0000 UTC m=+1494.583859135" lastFinishedPulling="2026-01-30 08:34:16.323487953 +0000 UTC m=+1495.019035062" observedRunningTime="2026-01-30 08:34:16.931823348 +0000 UTC m=+1495.627370457" watchObservedRunningTime="2026-01-30 08:34:16.9366318 +0000 UTC m=+1495.632178909" Jan 30 08:34:19 crc kubenswrapper[4870]: I0130 08:34:19.952702 4870 generic.go:334] "Generic (PLEG): container finished" podID="c22cad0f-b909-42fa-95c5-2536e1105161" containerID="d9ef039ab8c2e568b2b52b85afceb2fe4365ed15ae540b3a7b48e6bc4512bd56" exitCode=0 Jan 30 08:34:19 crc kubenswrapper[4870]: I0130 08:34:19.952791 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fpd48" event={"ID":"c22cad0f-b909-42fa-95c5-2536e1105161","Type":"ContainerDied","Data":"d9ef039ab8c2e568b2b52b85afceb2fe4365ed15ae540b3a7b48e6bc4512bd56"} Jan 30 08:34:21 crc kubenswrapper[4870]: I0130 08:34:21.444765 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fpd48" Jan 30 08:34:21 crc kubenswrapper[4870]: I0130 08:34:21.594013 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c22cad0f-b909-42fa-95c5-2536e1105161-ssh-key-openstack-edpm-ipam\") pod \"c22cad0f-b909-42fa-95c5-2536e1105161\" (UID: \"c22cad0f-b909-42fa-95c5-2536e1105161\") " Jan 30 08:34:21 crc kubenswrapper[4870]: I0130 08:34:21.594111 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c22cad0f-b909-42fa-95c5-2536e1105161-inventory\") pod \"c22cad0f-b909-42fa-95c5-2536e1105161\" (UID: \"c22cad0f-b909-42fa-95c5-2536e1105161\") " Jan 30 08:34:21 crc kubenswrapper[4870]: I0130 08:34:21.594357 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fkhs\" (UniqueName: \"kubernetes.io/projected/c22cad0f-b909-42fa-95c5-2536e1105161-kube-api-access-4fkhs\") pod \"c22cad0f-b909-42fa-95c5-2536e1105161\" (UID: \"c22cad0f-b909-42fa-95c5-2536e1105161\") " Jan 30 08:34:21 crc kubenswrapper[4870]: I0130 08:34:21.598929 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c22cad0f-b909-42fa-95c5-2536e1105161-kube-api-access-4fkhs" (OuterVolumeSpecName: "kube-api-access-4fkhs") pod "c22cad0f-b909-42fa-95c5-2536e1105161" (UID: "c22cad0f-b909-42fa-95c5-2536e1105161"). InnerVolumeSpecName "kube-api-access-4fkhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:34:21 crc kubenswrapper[4870]: I0130 08:34:21.621496 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c22cad0f-b909-42fa-95c5-2536e1105161-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c22cad0f-b909-42fa-95c5-2536e1105161" (UID: "c22cad0f-b909-42fa-95c5-2536e1105161"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:34:21 crc kubenswrapper[4870]: I0130 08:34:21.621938 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c22cad0f-b909-42fa-95c5-2536e1105161-inventory" (OuterVolumeSpecName: "inventory") pod "c22cad0f-b909-42fa-95c5-2536e1105161" (UID: "c22cad0f-b909-42fa-95c5-2536e1105161"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:34:21 crc kubenswrapper[4870]: I0130 08:34:21.696918 4870 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c22cad0f-b909-42fa-95c5-2536e1105161-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:21 crc kubenswrapper[4870]: I0130 08:34:21.696956 4870 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c22cad0f-b909-42fa-95c5-2536e1105161-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:21 crc kubenswrapper[4870]: I0130 08:34:21.696972 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fkhs\" (UniqueName: \"kubernetes.io/projected/c22cad0f-b909-42fa-95c5-2536e1105161-kube-api-access-4fkhs\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:21 crc kubenswrapper[4870]: I0130 08:34:21.976164 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fpd48" event={"ID":"c22cad0f-b909-42fa-95c5-2536e1105161","Type":"ContainerDied","Data":"487f115d8a470a1ae56a3e31ff4d51daf55d472d8271d7644a3fc821dc96f45b"} Jan 30 08:34:21 crc kubenswrapper[4870]: I0130 08:34:21.976215 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="487f115d8a470a1ae56a3e31ff4d51daf55d472d8271d7644a3fc821dc96f45b" Jan 30 08:34:21 crc kubenswrapper[4870]: I0130 08:34:21.976215 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fpd48" Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.052982 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c"] Jan 30 08:34:22 crc kubenswrapper[4870]: E0130 08:34:22.053523 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22cad0f-b909-42fa-95c5-2536e1105161" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.053545 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22cad0f-b909-42fa-95c5-2536e1105161" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.053821 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="c22cad0f-b909-42fa-95c5-2536e1105161" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.054920 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c" Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.059376 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n6f5w" Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.059498 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.059622 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.059782 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.069268 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c"] Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.213048 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/620aba2c-f389-4fc9-a27c-28c937894f7d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c\" (UID: \"620aba2c-f389-4fc9-a27c-28c937894f7d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c" Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.213177 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-725dm\" (UniqueName: \"kubernetes.io/projected/620aba2c-f389-4fc9-a27c-28c937894f7d-kube-api-access-725dm\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c\" (UID: \"620aba2c-f389-4fc9-a27c-28c937894f7d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c" Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.213257 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/620aba2c-f389-4fc9-a27c-28c937894f7d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c\" (UID: \"620aba2c-f389-4fc9-a27c-28c937894f7d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c" Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.213325 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620aba2c-f389-4fc9-a27c-28c937894f7d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c\" (UID: \"620aba2c-f389-4fc9-a27c-28c937894f7d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c" Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.315005 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-725dm\" (UniqueName: \"kubernetes.io/projected/620aba2c-f389-4fc9-a27c-28c937894f7d-kube-api-access-725dm\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c\" (UID: \"620aba2c-f389-4fc9-a27c-28c937894f7d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c" Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.315336 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/620aba2c-f389-4fc9-a27c-28c937894f7d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c\" (UID: \"620aba2c-f389-4fc9-a27c-28c937894f7d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c" Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.315472 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620aba2c-f389-4fc9-a27c-28c937894f7d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c\" (UID: \"620aba2c-f389-4fc9-a27c-28c937894f7d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c" Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.315564 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/620aba2c-f389-4fc9-a27c-28c937894f7d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c\" (UID: \"620aba2c-f389-4fc9-a27c-28c937894f7d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c" Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.322229 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620aba2c-f389-4fc9-a27c-28c937894f7d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c\" (UID: \"620aba2c-f389-4fc9-a27c-28c937894f7d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c" Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.322230 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/620aba2c-f389-4fc9-a27c-28c937894f7d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c\" (UID: \"620aba2c-f389-4fc9-a27c-28c937894f7d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c" Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.329048 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/620aba2c-f389-4fc9-a27c-28c937894f7d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c\" (UID: \"620aba2c-f389-4fc9-a27c-28c937894f7d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c" Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.349182 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-725dm\" (UniqueName: \"kubernetes.io/projected/620aba2c-f389-4fc9-a27c-28c937894f7d-kube-api-access-725dm\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c\" (UID: \"620aba2c-f389-4fc9-a27c-28c937894f7d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c" Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.384896 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c" Jan 30 08:34:22 crc kubenswrapper[4870]: I0130 08:34:22.968787 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c"] Jan 30 08:34:23 crc kubenswrapper[4870]: I0130 08:34:23.001501 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c" event={"ID":"620aba2c-f389-4fc9-a27c-28c937894f7d","Type":"ContainerStarted","Data":"5bf4e90c0f45b9a3ea6282a3aabea041683d9fdd400c0367a4321598c264f32e"} Jan 30 08:34:25 crc kubenswrapper[4870]: I0130 08:34:25.249570 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:34:25 crc kubenswrapper[4870]: I0130 08:34:25.250139 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:34:26 crc kubenswrapper[4870]: I0130 08:34:26.029138 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c" event={"ID":"620aba2c-f389-4fc9-a27c-28c937894f7d","Type":"ContainerStarted","Data":"9059f589d52c73a92e30dc6901a5f313013722880c7f17d79aebc9067dcd7fa9"} Jan 30 08:34:26 crc kubenswrapper[4870]: I0130 08:34:26.047458 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c" podStartSLOduration=1.307072706 podStartE2EDuration="4.047440419s" podCreationTimestamp="2026-01-30 08:34:22 +0000 UTC" firstStartedPulling="2026-01-30 08:34:22.975006613 +0000 UTC m=+1501.670553762" lastFinishedPulling="2026-01-30 08:34:25.715374346 +0000 UTC m=+1504.410921475" observedRunningTime="2026-01-30 08:34:26.046209211 +0000 UTC m=+1504.741756330" watchObservedRunningTime="2026-01-30 08:34:26.047440419 +0000 UTC m=+1504.742987538" Jan 30 08:34:35 crc kubenswrapper[4870]: I0130 08:34:35.250879 4870 scope.go:117] "RemoveContainer" containerID="3cc3794f576037b7275283832f1fcd12d44b3421b4fb40fee74fe7e2b82882e4" Jan 30 08:34:35 crc kubenswrapper[4870]: I0130 08:34:35.287622 4870 scope.go:117] "RemoveContainer" containerID="5ab755a04f7323eeb1d2f9974fb01380d8fa7fa3cba2f17b2589a502cb34ef24" Jan 30 08:34:55 crc kubenswrapper[4870]: I0130 08:34:55.249735 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:34:55 crc kubenswrapper[4870]: I0130 08:34:55.250440 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:35:25 crc kubenswrapper[4870]: I0130 08:35:25.250177 4870 patch_prober.go:28] 
interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:35:25 crc kubenswrapper[4870]: I0130 08:35:25.250858 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:35:25 crc kubenswrapper[4870]: I0130 08:35:25.250945 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:35:25 crc kubenswrapper[4870]: I0130 08:35:25.251905 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49"} pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:35:25 crc kubenswrapper[4870]: I0130 08:35:25.251981 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" containerID="cri-o://792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49" gracePeriod=600 Jan 30 08:35:25 crc kubenswrapper[4870]: E0130 08:35:25.376053 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:35:25 crc kubenswrapper[4870]: I0130 08:35:25.705944 4870 generic.go:334] "Generic (PLEG): container finished" podID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49" exitCode=0 Jan 30 08:35:25 crc kubenswrapper[4870]: I0130 08:35:25.706027 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerDied","Data":"792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49"} Jan 30 08:35:25 crc kubenswrapper[4870]: I0130 08:35:25.706100 4870 scope.go:117] "RemoveContainer" containerID="fed0dc1b3541c4793a049fc7617c3773e2d05f0ebfb934d3acc5ededede3b844" Jan 30 08:35:25 crc kubenswrapper[4870]: I0130 08:35:25.707446 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49" Jan 30 08:35:25 crc kubenswrapper[4870]: E0130 08:35:25.707911 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:35:35 crc kubenswrapper[4870]: I0130 08:35:35.385049 4870 scope.go:117] "RemoveContainer" containerID="8e106b0c6b2ed513250f13c043895b69dbe1cd77d36b5ecd4e47e2f2226b112e" Jan 30 08:35:35 crc kubenswrapper[4870]: I0130 08:35:35.418213 4870 scope.go:117] "RemoveContainer" containerID="b1d3aae9bf64c7d5adb6c7a0c0cef4cbd05ceee79bf2bdc26b1676e0ef8ac7ff" Jan 30 08:35:36 crc kubenswrapper[4870]: I0130 08:35:36.074755 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49" Jan 30 08:35:36 crc kubenswrapper[4870]: E0130 08:35:36.075126 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:35:51 crc kubenswrapper[4870]: I0130 08:35:51.075134 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49" Jan 30 08:35:51 crc kubenswrapper[4870]: E0130 08:35:51.077571 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:35:51 crc kubenswrapper[4870]: I0130 08:35:51.208868 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nqdjn"] Jan 30 08:35:51 crc kubenswrapper[4870]: I0130 08:35:51.216547 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nqdjn" Jan 30 08:35:51 crc kubenswrapper[4870]: I0130 08:35:51.236335 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nqdjn"] Jan 30 08:35:51 crc kubenswrapper[4870]: I0130 08:35:51.311974 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0a93ca5-633c-4649-b23a-38f6ad85457c-catalog-content\") pod \"community-operators-nqdjn\" (UID: \"e0a93ca5-633c-4649-b23a-38f6ad85457c\") " pod="openshift-marketplace/community-operators-nqdjn" Jan 30 08:35:51 crc kubenswrapper[4870]: I0130 08:35:51.312051 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvhrq\" (UniqueName: \"kubernetes.io/projected/e0a93ca5-633c-4649-b23a-38f6ad85457c-kube-api-access-dvhrq\") pod \"community-operators-nqdjn\" (UID: \"e0a93ca5-633c-4649-b23a-38f6ad85457c\") " pod="openshift-marketplace/community-operators-nqdjn" Jan 30 08:35:51 crc kubenswrapper[4870]: I0130 08:35:51.312154 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0a93ca5-633c-4649-b23a-38f6ad85457c-utilities\") pod \"community-operators-nqdjn\" (UID: \"e0a93ca5-633c-4649-b23a-38f6ad85457c\") " pod="openshift-marketplace/community-operators-nqdjn" Jan 30 08:35:51 crc kubenswrapper[4870]: I0130 08:35:51.415384 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0a93ca5-633c-4649-b23a-38f6ad85457c-catalog-content\") pod \"community-operators-nqdjn\" (UID: \"e0a93ca5-633c-4649-b23a-38f6ad85457c\") " pod="openshift-marketplace/community-operators-nqdjn" Jan 30 08:35:51 crc kubenswrapper[4870]: I0130 08:35:51.415445 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvhrq\" (UniqueName: \"kubernetes.io/projected/e0a93ca5-633c-4649-b23a-38f6ad85457c-kube-api-access-dvhrq\") pod \"community-operators-nqdjn\" (UID: \"e0a93ca5-633c-4649-b23a-38f6ad85457c\") " pod="openshift-marketplace/community-operators-nqdjn" Jan 30 08:35:51 crc kubenswrapper[4870]: I0130 08:35:51.415505 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0a93ca5-633c-4649-b23a-38f6ad85457c-utilities\") pod \"community-operators-nqdjn\" (UID: \"e0a93ca5-633c-4649-b23a-38f6ad85457c\") " pod="openshift-marketplace/community-operators-nqdjn" Jan 30 08:35:51 crc kubenswrapper[4870]: I0130 08:35:51.415966 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0a93ca5-633c-4649-b23a-38f6ad85457c-catalog-content\") pod \"community-operators-nqdjn\" (UID: \"e0a93ca5-633c-4649-b23a-38f6ad85457c\") " pod="openshift-marketplace/community-operators-nqdjn" Jan 30 08:35:51 crc kubenswrapper[4870]: I0130 08:35:51.416114 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0a93ca5-633c-4649-b23a-38f6ad85457c-utilities\") pod \"community-operators-nqdjn\" (UID: \"e0a93ca5-633c-4649-b23a-38f6ad85457c\") " pod="openshift-marketplace/community-operators-nqdjn" Jan 30 08:35:51 crc kubenswrapper[4870]: I0130 08:35:51.461909 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dvhrq\" (UniqueName: \"kubernetes.io/projected/e0a93ca5-633c-4649-b23a-38f6ad85457c-kube-api-access-dvhrq\") pod \"community-operators-nqdjn\" (UID: \"e0a93ca5-633c-4649-b23a-38f6ad85457c\") " pod="openshift-marketplace/community-operators-nqdjn" Jan 30 08:35:51 crc kubenswrapper[4870]: I0130 08:35:51.540624 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nqdjn" Jan 30 08:35:52 crc kubenswrapper[4870]: I0130 08:35:52.051378 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nqdjn"] Jan 30 08:35:53 crc kubenswrapper[4870]: I0130 08:35:53.015958 4870 generic.go:334] "Generic (PLEG): container finished" podID="e0a93ca5-633c-4649-b23a-38f6ad85457c" containerID="5f1c509eeee8b7481f1c459ae79215514f74b744ea680d7d513e15629ff5f1d0" exitCode=0 Jan 30 08:35:53 crc kubenswrapper[4870]: I0130 08:35:53.016048 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nqdjn" event={"ID":"e0a93ca5-633c-4649-b23a-38f6ad85457c","Type":"ContainerDied","Data":"5f1c509eeee8b7481f1c459ae79215514f74b744ea680d7d513e15629ff5f1d0"} Jan 30 08:35:53 crc kubenswrapper[4870]: I0130 08:35:53.016261 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nqdjn" event={"ID":"e0a93ca5-633c-4649-b23a-38f6ad85457c","Type":"ContainerStarted","Data":"9d39a058afc74a56bfb3df796b94ad3e8258ca3af82e61dd9d6b307c12ee9260"} Jan 30 08:35:53 crc kubenswrapper[4870]: I0130 08:35:53.021868 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 08:35:54 crc kubenswrapper[4870]: I0130 08:35:54.028441 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nqdjn" event={"ID":"e0a93ca5-633c-4649-b23a-38f6ad85457c","Type":"ContainerStarted","Data":"346f08667613a1597067cc9e0b8f0190c7c39e39ac5f3b57bc30e1bd745cd839"} Jan 30 08:35:55 crc kubenswrapper[4870]: I0130 08:35:55.041262 4870 generic.go:334] "Generic (PLEG): container finished" podID="e0a93ca5-633c-4649-b23a-38f6ad85457c" containerID="346f08667613a1597067cc9e0b8f0190c7c39e39ac5f3b57bc30e1bd745cd839" exitCode=0 Jan 30 08:35:55 crc kubenswrapper[4870]: I0130 08:35:55.041331 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nqdjn" event={"ID":"e0a93ca5-633c-4649-b23a-38f6ad85457c","Type":"ContainerDied","Data":"346f08667613a1597067cc9e0b8f0190c7c39e39ac5f3b57bc30e1bd745cd839"} Jan 30 08:35:56 crc kubenswrapper[4870]: I0130 08:35:56.053581 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nqdjn" event={"ID":"e0a93ca5-633c-4649-b23a-38f6ad85457c","Type":"ContainerStarted","Data":"c35189f24773e7e5586f4dab8b7ae4b1c1e1a8d3835f49323c3e68c922681e13"} Jan 30 08:35:56 crc kubenswrapper[4870]: I0130 08:35:56.079255 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nqdjn" podStartSLOduration=2.64546292 podStartE2EDuration="5.079235179s" podCreationTimestamp="2026-01-30 08:35:51 +0000 UTC" firstStartedPulling="2026-01-30 08:35:53.021632497 +0000 UTC m=+1591.717179596" lastFinishedPulling="2026-01-30 08:35:55.455404746 +0000 UTC m=+1594.150951855" observedRunningTime="2026-01-30 08:35:56.069691873 +0000 UTC m=+1594.765239012" watchObservedRunningTime="2026-01-30 
08:35:56.079235179 +0000 UTC m=+1594.774782298" Jan 30 08:36:01 crc kubenswrapper[4870]: I0130 08:36:01.540986 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nqdjn" Jan 30 08:36:01 crc kubenswrapper[4870]: I0130 08:36:01.541621 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nqdjn" Jan 30 08:36:01 crc kubenswrapper[4870]: I0130 08:36:01.607069 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nqdjn" Jan 30 08:36:02 crc kubenswrapper[4870]: I0130 08:36:02.081565 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49" Jan 30 08:36:02 crc kubenswrapper[4870]: E0130 08:36:02.082084 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:36:02 crc kubenswrapper[4870]: I0130 08:36:02.171748 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nqdjn" Jan 30 08:36:02 crc kubenswrapper[4870]: I0130 08:36:02.243774 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nqdjn"] Jan 30 08:36:04 crc kubenswrapper[4870]: I0130 08:36:04.145675 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nqdjn" podUID="e0a93ca5-633c-4649-b23a-38f6ad85457c" containerName="registry-server" containerID="cri-o://c35189f24773e7e5586f4dab8b7ae4b1c1e1a8d3835f49323c3e68c922681e13" gracePeriod=2 Jan 30 08:36:04 crc kubenswrapper[4870]: I0130 08:36:04.622463 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nqdjn" Jan 30 08:36:04 crc kubenswrapper[4870]: I0130 08:36:04.706380 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0a93ca5-633c-4649-b23a-38f6ad85457c-utilities\") pod \"e0a93ca5-633c-4649-b23a-38f6ad85457c\" (UID: \"e0a93ca5-633c-4649-b23a-38f6ad85457c\") " Jan 30 08:36:04 crc kubenswrapper[4870]: I0130 08:36:04.706500 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0a93ca5-633c-4649-b23a-38f6ad85457c-catalog-content\") pod \"e0a93ca5-633c-4649-b23a-38f6ad85457c\" (UID: \"e0a93ca5-633c-4649-b23a-38f6ad85457c\") " Jan 30 08:36:04 crc kubenswrapper[4870]: I0130 08:36:04.706593 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvhrq\" (UniqueName: \"kubernetes.io/projected/e0a93ca5-633c-4649-b23a-38f6ad85457c-kube-api-access-dvhrq\") pod \"e0a93ca5-633c-4649-b23a-38f6ad85457c\" (UID: \"e0a93ca5-633c-4649-b23a-38f6ad85457c\") " Jan 30 08:36:04 crc kubenswrapper[4870]: I0130 08:36:04.708436 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0a93ca5-633c-4649-b23a-38f6ad85457c-utilities" (OuterVolumeSpecName: "utilities") pod "e0a93ca5-633c-4649-b23a-38f6ad85457c" (UID: "e0a93ca5-633c-4649-b23a-38f6ad85457c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:36:04 crc kubenswrapper[4870]: I0130 08:36:04.712084 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0a93ca5-633c-4649-b23a-38f6ad85457c-kube-api-access-dvhrq" (OuterVolumeSpecName: "kube-api-access-dvhrq") pod "e0a93ca5-633c-4649-b23a-38f6ad85457c" (UID: "e0a93ca5-633c-4649-b23a-38f6ad85457c"). InnerVolumeSpecName "kube-api-access-dvhrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:36:04 crc kubenswrapper[4870]: I0130 08:36:04.752664 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0a93ca5-633c-4649-b23a-38f6ad85457c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0a93ca5-633c-4649-b23a-38f6ad85457c" (UID: "e0a93ca5-633c-4649-b23a-38f6ad85457c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:36:04 crc kubenswrapper[4870]: I0130 08:36:04.809399 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvhrq\" (UniqueName: \"kubernetes.io/projected/e0a93ca5-633c-4649-b23a-38f6ad85457c-kube-api-access-dvhrq\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:04 crc kubenswrapper[4870]: I0130 08:36:04.809438 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0a93ca5-633c-4649-b23a-38f6ad85457c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:04 crc kubenswrapper[4870]: I0130 08:36:04.809450 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0a93ca5-633c-4649-b23a-38f6ad85457c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:05 crc kubenswrapper[4870]: I0130 08:36:05.157774 4870 generic.go:334] "Generic (PLEG): container finished" podID="e0a93ca5-633c-4649-b23a-38f6ad85457c" containerID="c35189f24773e7e5586f4dab8b7ae4b1c1e1a8d3835f49323c3e68c922681e13" exitCode=0 Jan 30 08:36:05 crc kubenswrapper[4870]: I0130 08:36:05.157859 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nqdjn" Jan 30 08:36:05 crc kubenswrapper[4870]: I0130 08:36:05.157947 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nqdjn" event={"ID":"e0a93ca5-633c-4649-b23a-38f6ad85457c","Type":"ContainerDied","Data":"c35189f24773e7e5586f4dab8b7ae4b1c1e1a8d3835f49323c3e68c922681e13"} Jan 30 08:36:05 crc kubenswrapper[4870]: I0130 08:36:05.158293 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nqdjn" event={"ID":"e0a93ca5-633c-4649-b23a-38f6ad85457c","Type":"ContainerDied","Data":"9d39a058afc74a56bfb3df796b94ad3e8258ca3af82e61dd9d6b307c12ee9260"} Jan 30 08:36:05 crc kubenswrapper[4870]: I0130 08:36:05.158329 4870 scope.go:117] "RemoveContainer" containerID="c35189f24773e7e5586f4dab8b7ae4b1c1e1a8d3835f49323c3e68c922681e13" Jan 30 08:36:05 crc kubenswrapper[4870]: I0130 08:36:05.191476 4870 scope.go:117] "RemoveContainer" containerID="346f08667613a1597067cc9e0b8f0190c7c39e39ac5f3b57bc30e1bd745cd839" Jan 30 08:36:05 crc kubenswrapper[4870]: I0130 08:36:05.201046 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nqdjn"] Jan 30 08:36:05 crc kubenswrapper[4870]: I0130 08:36:05.212951 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nqdjn"] Jan 30 08:36:05 crc kubenswrapper[4870]: I0130 08:36:05.226279 4870 scope.go:117] "RemoveContainer" containerID="5f1c509eeee8b7481f1c459ae79215514f74b744ea680d7d513e15629ff5f1d0" Jan 30 08:36:05 crc kubenswrapper[4870]: I0130 08:36:05.289731 4870 scope.go:117] "RemoveContainer" containerID="c35189f24773e7e5586f4dab8b7ae4b1c1e1a8d3835f49323c3e68c922681e13" Jan 30 08:36:05 crc kubenswrapper[4870]: E0130 08:36:05.290166 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c35189f24773e7e5586f4dab8b7ae4b1c1e1a8d3835f49323c3e68c922681e13\": container with ID starting with c35189f24773e7e5586f4dab8b7ae4b1c1e1a8d3835f49323c3e68c922681e13 not found: ID does not exist" containerID="c35189f24773e7e5586f4dab8b7ae4b1c1e1a8d3835f49323c3e68c922681e13" Jan 30 08:36:05 crc kubenswrapper[4870]: I0130 08:36:05.290205 
4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35189f24773e7e5586f4dab8b7ae4b1c1e1a8d3835f49323c3e68c922681e13"} err="failed to get container status \"c35189f24773e7e5586f4dab8b7ae4b1c1e1a8d3835f49323c3e68c922681e13\": rpc error: code = NotFound desc = could not find container \"c35189f24773e7e5586f4dab8b7ae4b1c1e1a8d3835f49323c3e68c922681e13\": container with ID starting with c35189f24773e7e5586f4dab8b7ae4b1c1e1a8d3835f49323c3e68c922681e13 not found: ID does not exist" Jan 30 08:36:05 crc kubenswrapper[4870]: I0130 08:36:05.290230 4870 scope.go:117] "RemoveContainer" containerID="346f08667613a1597067cc9e0b8f0190c7c39e39ac5f3b57bc30e1bd745cd839" Jan 30 08:36:05 crc kubenswrapper[4870]: E0130 08:36:05.290666 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"346f08667613a1597067cc9e0b8f0190c7c39e39ac5f3b57bc30e1bd745cd839\": container with ID starting with 346f08667613a1597067cc9e0b8f0190c7c39e39ac5f3b57bc30e1bd745cd839 not found: ID does not exist" containerID="346f08667613a1597067cc9e0b8f0190c7c39e39ac5f3b57bc30e1bd745cd839" Jan 30 08:36:05 crc kubenswrapper[4870]: I0130 08:36:05.290693 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"346f08667613a1597067cc9e0b8f0190c7c39e39ac5f3b57bc30e1bd745cd839"} err="failed to get container status \"346f08667613a1597067cc9e0b8f0190c7c39e39ac5f3b57bc30e1bd745cd839\": rpc error: code = NotFound desc = could not find container \"346f08667613a1597067cc9e0b8f0190c7c39e39ac5f3b57bc30e1bd745cd839\": container with ID starting with 346f08667613a1597067cc9e0b8f0190c7c39e39ac5f3b57bc30e1bd745cd839 not found: ID does not exist" Jan 30 08:36:05 crc kubenswrapper[4870]: I0130 08:36:05.290712 4870 scope.go:117] "RemoveContainer" containerID="5f1c509eeee8b7481f1c459ae79215514f74b744ea680d7d513e15629ff5f1d0" Jan 30 08:36:05 crc kubenswrapper[4870]: E0130 08:36:05.291063 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f1c509eeee8b7481f1c459ae79215514f74b744ea680d7d513e15629ff5f1d0\": container with ID starting with 5f1c509eeee8b7481f1c459ae79215514f74b744ea680d7d513e15629ff5f1d0 not found: ID does not exist" containerID="5f1c509eeee8b7481f1c459ae79215514f74b744ea680d7d513e15629ff5f1d0" Jan 30 08:36:05 crc kubenswrapper[4870]: I0130 08:36:05.291087 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f1c509eeee8b7481f1c459ae79215514f74b744ea680d7d513e15629ff5f1d0"} err="failed to get container status \"5f1c509eeee8b7481f1c459ae79215514f74b744ea680d7d513e15629ff5f1d0\": rpc error: code = NotFound desc = could not find container \"5f1c509eeee8b7481f1c459ae79215514f74b744ea680d7d513e15629ff5f1d0\": container with ID starting with 5f1c509eeee8b7481f1c459ae79215514f74b744ea680d7d513e15629ff5f1d0 not found: ID does not exist" Jan 30 08:36:06 crc kubenswrapper[4870]: I0130 08:36:06.089843 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0a93ca5-633c-4649-b23a-38f6ad85457c" path="/var/lib/kubelet/pods/e0a93ca5-633c-4649-b23a-38f6ad85457c/volumes" Jan 30 08:36:15 crc kubenswrapper[4870]: I0130 08:36:15.077637 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49" Jan 30 08:36:15 crc kubenswrapper[4870]: E0130 08:36:15.078320 4870 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:36:26 crc kubenswrapper[4870]: I0130 08:36:26.075490 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49" Jan 30 08:36:26 crc kubenswrapper[4870]: E0130 08:36:26.076536 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:36:30 crc kubenswrapper[4870]: I0130 08:36:30.537241 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vwzrt"] Jan 30 08:36:30 crc kubenswrapper[4870]: E0130 08:36:30.538360 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a93ca5-633c-4649-b23a-38f6ad85457c" containerName="extract-content" Jan 30 08:36:30 crc kubenswrapper[4870]: I0130 08:36:30.538378 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a93ca5-633c-4649-b23a-38f6ad85457c" containerName="extract-content" Jan 30 08:36:30 crc kubenswrapper[4870]: E0130 08:36:30.538399 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a93ca5-633c-4649-b23a-38f6ad85457c" containerName="registry-server" Jan 30 08:36:30 crc kubenswrapper[4870]: I0130 08:36:30.539261 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a93ca5-633c-4649-b23a-38f6ad85457c" containerName="registry-server" Jan 30 08:36:30 crc kubenswrapper[4870]: E0130 08:36:30.539310 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a93ca5-633c-4649-b23a-38f6ad85457c" containerName="extract-utilities" Jan 30 08:36:30 crc kubenswrapper[4870]: I0130 08:36:30.539320 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a93ca5-633c-4649-b23a-38f6ad85457c" containerName="extract-utilities" Jan 30 08:36:30 crc kubenswrapper[4870]: I0130 08:36:30.539561 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0a93ca5-633c-4649-b23a-38f6ad85457c" containerName="registry-server" Jan 30 08:36:30 crc kubenswrapper[4870]: I0130 08:36:30.541358 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vwzrt" Jan 30 08:36:30 crc kubenswrapper[4870]: I0130 08:36:30.564112 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vwzrt"] Jan 30 08:36:30 crc kubenswrapper[4870]: I0130 08:36:30.576055 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a6d20bc-8755-40fc-a830-91f52584145f-catalog-content\") pod \"certified-operators-vwzrt\" (UID: \"6a6d20bc-8755-40fc-a830-91f52584145f\") " pod="openshift-marketplace/certified-operators-vwzrt" Jan 30 08:36:30 crc kubenswrapper[4870]: I0130 08:36:30.576094 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9wgd\" (UniqueName: \"kubernetes.io/projected/6a6d20bc-8755-40fc-a830-91f52584145f-kube-api-access-x9wgd\") pod \"certified-operators-vwzrt\" (UID: \"6a6d20bc-8755-40fc-a830-91f52584145f\") " pod="openshift-marketplace/certified-operators-vwzrt" Jan 30 08:36:30 crc kubenswrapper[4870]: I0130 08:36:30.576120 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a6d20bc-8755-40fc-a830-91f52584145f-utilities\") pod \"certified-operators-vwzrt\" (UID: \"6a6d20bc-8755-40fc-a830-91f52584145f\") " pod="openshift-marketplace/certified-operators-vwzrt" Jan 30 08:36:30 crc kubenswrapper[4870]: I0130 08:36:30.677955 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a6d20bc-8755-40fc-a830-91f52584145f-catalog-content\") pod \"certified-operators-vwzrt\" (UID: \"6a6d20bc-8755-40fc-a830-91f52584145f\") " pod="openshift-marketplace/certified-operators-vwzrt" Jan 30 08:36:30 crc kubenswrapper[4870]: I0130 08:36:30.678032 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9wgd\" (UniqueName: \"kubernetes.io/projected/6a6d20bc-8755-40fc-a830-91f52584145f-kube-api-access-x9wgd\") pod \"certified-operators-vwzrt\" (UID: \"6a6d20bc-8755-40fc-a830-91f52584145f\") " pod="openshift-marketplace/certified-operators-vwzrt" Jan 30 08:36:30 crc kubenswrapper[4870]: I0130 08:36:30.678076 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a6d20bc-8755-40fc-a830-91f52584145f-utilities\") pod \"certified-operators-vwzrt\" (UID: \"6a6d20bc-8755-40fc-a830-91f52584145f\") " pod="openshift-marketplace/certified-operators-vwzrt" Jan 30 08:36:30 crc kubenswrapper[4870]: I0130 08:36:30.678555 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a6d20bc-8755-40fc-a830-91f52584145f-catalog-content\") pod \"certified-operators-vwzrt\" (UID: \"6a6d20bc-8755-40fc-a830-91f52584145f\") " pod="openshift-marketplace/certified-operators-vwzrt" Jan 30 08:36:30 crc kubenswrapper[4870]: I0130 08:36:30.678602 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a6d20bc-8755-40fc-a830-91f52584145f-utilities\") pod \"certified-operators-vwzrt\" (UID: \"6a6d20bc-8755-40fc-a830-91f52584145f\") " pod="openshift-marketplace/certified-operators-vwzrt" Jan 30 08:36:30 crc kubenswrapper[4870]: I0130 08:36:30.710852 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x9wgd\" (UniqueName: \"kubernetes.io/projected/6a6d20bc-8755-40fc-a830-91f52584145f-kube-api-access-x9wgd\") pod \"certified-operators-vwzrt\" (UID: \"6a6d20bc-8755-40fc-a830-91f52584145f\") " pod="openshift-marketplace/certified-operators-vwzrt" Jan 30 08:36:30 crc kubenswrapper[4870]: I0130 08:36:30.874415 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwzrt" Jan 30 08:36:31 crc kubenswrapper[4870]: I0130 08:36:31.408701 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vwzrt"] Jan 30 08:36:31 crc kubenswrapper[4870]: W0130 08:36:31.420735 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a6d20bc_8755_40fc_a830_91f52584145f.slice/crio-6ed1107024871effb54a515b5ce8f15e0ec80ecb07f4b73d435b44ce270b6360 WatchSource:0}: Error finding container 6ed1107024871effb54a515b5ce8f15e0ec80ecb07f4b73d435b44ce270b6360: Status 404 returned error can't find the container with id 6ed1107024871effb54a515b5ce8f15e0ec80ecb07f4b73d435b44ce270b6360 Jan 30 08:36:32 crc kubenswrapper[4870]: I0130 08:36:32.445573 4870 generic.go:334] "Generic (PLEG): container finished" podID="6a6d20bc-8755-40fc-a830-91f52584145f" containerID="4829db839cbadba4b50ba838a0028137cd5bae1a5e995ea792f030384fd924d0" exitCode=0 Jan 30 08:36:32 crc kubenswrapper[4870]: I0130 08:36:32.445742 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwzrt" event={"ID":"6a6d20bc-8755-40fc-a830-91f52584145f","Type":"ContainerDied","Data":"4829db839cbadba4b50ba838a0028137cd5bae1a5e995ea792f030384fd924d0"} Jan 30 08:36:32 crc kubenswrapper[4870]: I0130 08:36:32.446108 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwzrt" event={"ID":"6a6d20bc-8755-40fc-a830-91f52584145f","Type":"ContainerStarted","Data":"6ed1107024871effb54a515b5ce8f15e0ec80ecb07f4b73d435b44ce270b6360"} Jan 30 08:36:33 crc kubenswrapper[4870]: I0130 08:36:33.459680 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwzrt" event={"ID":"6a6d20bc-8755-40fc-a830-91f52584145f","Type":"ContainerStarted","Data":"ebba5706fbb7a13afe99283a09c87f3f21b1fffa0593913b311f354c43a2956e"} Jan 30 08:36:34 crc kubenswrapper[4870]: I0130 08:36:34.468519 4870 generic.go:334] "Generic (PLEG): container finished" podID="6a6d20bc-8755-40fc-a830-91f52584145f" containerID="ebba5706fbb7a13afe99283a09c87f3f21b1fffa0593913b311f354c43a2956e" exitCode=0 Jan 30 08:36:34 crc kubenswrapper[4870]: I0130 08:36:34.468603 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwzrt" event={"ID":"6a6d20bc-8755-40fc-a830-91f52584145f","Type":"ContainerDied","Data":"ebba5706fbb7a13afe99283a09c87f3f21b1fffa0593913b311f354c43a2956e"} Jan 30 08:36:35 crc kubenswrapper[4870]: I0130 08:36:35.538726 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwzrt" event={"ID":"6a6d20bc-8755-40fc-a830-91f52584145f","Type":"ContainerStarted","Data":"680e1b239b6bc11690a9df9dd3a6ce4a2ed253f413cc48929dead689ad8e2fb9"} Jan 30 08:36:35 crc kubenswrapper[4870]: I0130 08:36:35.567409 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vwzrt" 
podStartSLOduration=3.081184265 podStartE2EDuration="5.567387982s" podCreationTimestamp="2026-01-30 08:36:30 +0000 UTC" firstStartedPulling="2026-01-30 08:36:32.450633323 +0000 UTC m=+1631.146180472" lastFinishedPulling="2026-01-30 08:36:34.93683708 +0000 UTC m=+1633.632384189" observedRunningTime="2026-01-30 08:36:35.557507895 +0000 UTC m=+1634.253055004" watchObservedRunningTime="2026-01-30 08:36:35.567387982 +0000 UTC m=+1634.262935091" Jan 30 08:36:38 crc kubenswrapper[4870]: I0130 08:36:38.074640 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49" Jan 30 08:36:38 crc kubenswrapper[4870]: E0130 08:36:38.075478 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:36:40 crc kubenswrapper[4870]: I0130 08:36:40.875627 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vwzrt" Jan 30 08:36:40 crc kubenswrapper[4870]: I0130 08:36:40.876056 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vwzrt" Jan 30 08:36:40 crc kubenswrapper[4870]: I0130 08:36:40.933520 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vwzrt" Jan 30 08:36:41 crc kubenswrapper[4870]: I0130 08:36:41.686175 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vwzrt" Jan 30 08:36:41 crc kubenswrapper[4870]: I0130 08:36:41.731837 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vwzrt"] Jan 30 08:36:44 crc kubenswrapper[4870]: I0130 08:36:44.191422 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vwzrt" podUID="6a6d20bc-8755-40fc-a830-91f52584145f" containerName="registry-server" containerID="cri-o://680e1b239b6bc11690a9df9dd3a6ce4a2ed253f413cc48929dead689ad8e2fb9" gracePeriod=2 Jan 30 08:36:45 crc kubenswrapper[4870]: I0130 08:36:45.204843 4870 generic.go:334] "Generic (PLEG): container finished" podID="6a6d20bc-8755-40fc-a830-91f52584145f" containerID="680e1b239b6bc11690a9df9dd3a6ce4a2ed253f413cc48929dead689ad8e2fb9" exitCode=0 Jan 30 08:36:45 crc kubenswrapper[4870]: I0130 08:36:45.204902 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwzrt" event={"ID":"6a6d20bc-8755-40fc-a830-91f52584145f","Type":"ContainerDied","Data":"680e1b239b6bc11690a9df9dd3a6ce4a2ed253f413cc48929dead689ad8e2fb9"} Jan 30 08:36:45 crc kubenswrapper[4870]: I0130 08:36:45.430938 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vwzrt" Jan 30 08:36:45 crc kubenswrapper[4870]: I0130 08:36:45.549163 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9wgd\" (UniqueName: \"kubernetes.io/projected/6a6d20bc-8755-40fc-a830-91f52584145f-kube-api-access-x9wgd\") pod \"6a6d20bc-8755-40fc-a830-91f52584145f\" (UID: \"6a6d20bc-8755-40fc-a830-91f52584145f\") " Jan 30 08:36:45 crc kubenswrapper[4870]: I0130 08:36:45.549231 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a6d20bc-8755-40fc-a830-91f52584145f-catalog-content\") pod \"6a6d20bc-8755-40fc-a830-91f52584145f\" (UID: \"6a6d20bc-8755-40fc-a830-91f52584145f\") " Jan 30 08:36:45 crc kubenswrapper[4870]: I0130 08:36:45.549282 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a6d20bc-8755-40fc-a830-91f52584145f-utilities\") pod \"6a6d20bc-8755-40fc-a830-91f52584145f\" (UID: \"6a6d20bc-8755-40fc-a830-91f52584145f\") " Jan 30 08:36:45 crc kubenswrapper[4870]: I0130 08:36:45.550325 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a6d20bc-8755-40fc-a830-91f52584145f-utilities" (OuterVolumeSpecName: "utilities") pod "6a6d20bc-8755-40fc-a830-91f52584145f" (UID: "6a6d20bc-8755-40fc-a830-91f52584145f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:36:45 crc kubenswrapper[4870]: I0130 08:36:45.555526 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a6d20bc-8755-40fc-a830-91f52584145f-kube-api-access-x9wgd" (OuterVolumeSpecName: "kube-api-access-x9wgd") pod "6a6d20bc-8755-40fc-a830-91f52584145f" (UID: "6a6d20bc-8755-40fc-a830-91f52584145f"). InnerVolumeSpecName "kube-api-access-x9wgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:36:45 crc kubenswrapper[4870]: I0130 08:36:45.604680 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a6d20bc-8755-40fc-a830-91f52584145f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a6d20bc-8755-40fc-a830-91f52584145f" (UID: "6a6d20bc-8755-40fc-a830-91f52584145f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:36:45 crc kubenswrapper[4870]: I0130 08:36:45.652337 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9wgd\" (UniqueName: \"kubernetes.io/projected/6a6d20bc-8755-40fc-a830-91f52584145f-kube-api-access-x9wgd\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:45 crc kubenswrapper[4870]: I0130 08:36:45.652396 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a6d20bc-8755-40fc-a830-91f52584145f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:45 crc kubenswrapper[4870]: I0130 08:36:45.652410 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a6d20bc-8755-40fc-a830-91f52584145f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:46 crc kubenswrapper[4870]: I0130 08:36:46.218632 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwzrt" event={"ID":"6a6d20bc-8755-40fc-a830-91f52584145f","Type":"ContainerDied","Data":"6ed1107024871effb54a515b5ce8f15e0ec80ecb07f4b73d435b44ce270b6360"} Jan 30 08:36:46 crc kubenswrapper[4870]: I0130 08:36:46.219378 4870 scope.go:117] "RemoveContainer" containerID="680e1b239b6bc11690a9df9dd3a6ce4a2ed253f413cc48929dead689ad8e2fb9" Jan 30 08:36:46 crc kubenswrapper[4870]: I0130 08:36:46.218807 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwzrt" Jan 30 08:36:46 crc kubenswrapper[4870]: I0130 08:36:46.248207 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vwzrt"] Jan 30 08:36:46 crc kubenswrapper[4870]: I0130 08:36:46.250855 4870 scope.go:117] "RemoveContainer" containerID="ebba5706fbb7a13afe99283a09c87f3f21b1fffa0593913b311f354c43a2956e" Jan 30 08:36:46 crc kubenswrapper[4870]: I0130 08:36:46.261930 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vwzrt"] Jan 30 08:36:46 crc kubenswrapper[4870]: I0130 08:36:46.280519 4870 scope.go:117] "RemoveContainer" containerID="4829db839cbadba4b50ba838a0028137cd5bae1a5e995ea792f030384fd924d0" Jan 30 08:36:48 crc kubenswrapper[4870]: I0130 08:36:48.085357 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a6d20bc-8755-40fc-a830-91f52584145f" path="/var/lib/kubelet/pods/6a6d20bc-8755-40fc-a830-91f52584145f/volumes" Jan 30 08:36:52 crc kubenswrapper[4870]: I0130 08:36:52.096022 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49" Jan 30 08:36:52 crc kubenswrapper[4870]: E0130 08:36:52.096734 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:37:07 crc kubenswrapper[4870]: I0130 08:37:07.074769 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49" Jan 30 08:37:07 crc kubenswrapper[4870]: E0130 08:37:07.075663 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:37:13 crc kubenswrapper[4870]: I0130 08:37:13.098802 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-b24b-account-create-update-d2n4p"] Jan 30 08:37:13 crc kubenswrapper[4870]: I0130 08:37:13.109713 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-be9b-account-create-update-lgqm6"] Jan 30 08:37:13 crc kubenswrapper[4870]: I0130 08:37:13.122671 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-b24b-account-create-update-d2n4p"] Jan 30 08:37:13 crc kubenswrapper[4870]: I0130 08:37:13.136413 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-be9b-account-create-update-lgqm6"] Jan 30 08:37:13 crc kubenswrapper[4870]: I0130 08:37:13.156511 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-772bw"] Jan 30 08:37:13 crc kubenswrapper[4870]: I0130 08:37:13.164406 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-zdg4s"] Jan 30 08:37:13 crc kubenswrapper[4870]: I0130 08:37:13.172482 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-zdg4s"] Jan 30 08:37:13 crc kubenswrapper[4870]: I0130 08:37:13.184073 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-772bw"] Jan 30 08:37:14 crc kubenswrapper[4870]: I0130 08:37:14.097565 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b66abfb-27d1-415e-abf2-2cb855a2bcaf" path="/var/lib/kubelet/pods/3b66abfb-27d1-415e-abf2-2cb855a2bcaf/volumes" Jan 30 08:37:14 crc kubenswrapper[4870]: I0130 08:37:14.098498 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ac3a52d-4734-4be8-9530-6b7b535664f8" path="/var/lib/kubelet/pods/5ac3a52d-4734-4be8-9530-6b7b535664f8/volumes" Jan 30 08:37:14 crc kubenswrapper[4870]: I0130 08:37:14.099558 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93cd49cf-8353-49eb-89d2-2d3630503d9f" path="/var/lib/kubelet/pods/93cd49cf-8353-49eb-89d2-2d3630503d9f/volumes" Jan 30 08:37:14 crc kubenswrapper[4870]: I0130 08:37:14.100365 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e990d4f-b684-47e6-8056-08cf765aa33d" path="/var/lib/kubelet/pods/9e990d4f-b684-47e6-8056-08cf765aa33d/volumes" Jan 30 08:37:15 crc kubenswrapper[4870]: I0130 08:37:15.034231 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-a8a4-account-create-update-8gm2f"] Jan 30 08:37:15 crc kubenswrapper[4870]: I0130 08:37:15.047224 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-a8a4-account-create-update-8gm2f"] Jan 30 08:37:15 crc kubenswrapper[4870]: I0130 08:37:15.056488 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-x6s7d"] Jan 30 08:37:15 crc kubenswrapper[4870]: I0130 08:37:15.064511 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-x6s7d"] Jan 30 08:37:16 crc kubenswrapper[4870]: I0130 08:37:16.091430 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3" 
path="/var/lib/kubelet/pods/40a93c77-4e48-4cd1-b89b-6a7de4d9c5e3/volumes" Jan 30 08:37:16 crc kubenswrapper[4870]: I0130 08:37:16.093078 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="585a2047-d3db-4822-89b3-52fcd65d6e09" path="/var/lib/kubelet/pods/585a2047-d3db-4822-89b3-52fcd65d6e09/volumes" Jan 30 08:37:21 crc kubenswrapper[4870]: I0130 08:37:21.041151 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-cpgc6"] Jan 30 08:37:21 crc kubenswrapper[4870]: I0130 08:37:21.051393 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-cpgc6"] Jan 30 08:37:22 crc kubenswrapper[4870]: I0130 08:37:22.086752 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49" Jan 30 08:37:22 crc kubenswrapper[4870]: E0130 08:37:22.087273 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:37:22 crc kubenswrapper[4870]: I0130 08:37:22.087428 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bc3ddf0-5fc8-4425-a434-1452753e1297" path="/var/lib/kubelet/pods/8bc3ddf0-5fc8-4425-a434-1452753e1297/volumes" Jan 30 08:37:34 crc kubenswrapper[4870]: I0130 08:37:34.074914 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49" Jan 30 08:37:34 crc kubenswrapper[4870]: E0130 08:37:34.075742 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:37:35 crc kubenswrapper[4870]: I0130 08:37:35.514993 4870 scope.go:117] "RemoveContainer" containerID="7e7325618d20bdeab54c732bc7a397cb58a9db4a697a599a002533f4811bf8bd" Jan 30 08:37:35 crc kubenswrapper[4870]: I0130 08:37:35.551804 4870 scope.go:117] "RemoveContainer" containerID="1e43a638833e8a28b17503377129992ef8df2c8dae8700c2567db5f0ab6b74f9" Jan 30 08:37:35 crc kubenswrapper[4870]: I0130 08:37:35.605676 4870 scope.go:117] "RemoveContainer" containerID="6dec8f4d9911b49219f94545d1dff11226dd491baa26e53a02289cf2ce287699" Jan 30 08:37:35 crc kubenswrapper[4870]: I0130 08:37:35.647146 4870 scope.go:117] "RemoveContainer" containerID="cf022953959b6108a335c22e59a92909beccc351b6ace66848278caae812affb" Jan 30 08:37:35 crc kubenswrapper[4870]: I0130 08:37:35.699327 4870 scope.go:117] "RemoveContainer" containerID="e32e8f9eb095a0767af1259a467ea84160f17bae2cb726e02486629d03a26d33" Jan 30 08:37:35 crc kubenswrapper[4870]: I0130 08:37:35.744204 4870 scope.go:117] "RemoveContainer" containerID="78530e29e6f33fe9e6244539f845bfc30d3752986bcfd2b607b62cc6f7d5aab3" Jan 30 08:37:35 crc kubenswrapper[4870]: I0130 08:37:35.822082 4870 scope.go:117] "RemoveContainer" containerID="5726ace895a9d7102cc621cf411a4327a47995798d8abdba29b293b762399c80" Jan 30 08:37:35 crc kubenswrapper[4870]: I0130 
08:37:35.845288 4870 scope.go:117] "RemoveContainer" containerID="895f5e0a2008516657010356d30e83d3b79850fdf910e5ede0c0b5280b3040c2" Jan 30 08:37:35 crc kubenswrapper[4870]: I0130 08:37:35.865697 4870 scope.go:117] "RemoveContainer" containerID="ae557205b83ba573012321c0b15a5b47277e108dca93d5acd055965c34b03da8" Jan 30 08:37:35 crc kubenswrapper[4870]: I0130 08:37:35.884127 4870 scope.go:117] "RemoveContainer" containerID="fa09eeaef0e8d067370ba4e9a769247437b75f3bbc783ef72b9b39a713b37db0" Jan 30 08:37:37 crc kubenswrapper[4870]: I0130 08:37:37.791228 4870 generic.go:334] "Generic (PLEG): container finished" podID="620aba2c-f389-4fc9-a27c-28c937894f7d" containerID="9059f589d52c73a92e30dc6901a5f313013722880c7f17d79aebc9067dcd7fa9" exitCode=0 Jan 30 08:37:37 crc kubenswrapper[4870]: I0130 08:37:37.791340 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c" event={"ID":"620aba2c-f389-4fc9-a27c-28c937894f7d","Type":"ContainerDied","Data":"9059f589d52c73a92e30dc6901a5f313013722880c7f17d79aebc9067dcd7fa9"} Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.323696 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.423135 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/620aba2c-f389-4fc9-a27c-28c937894f7d-inventory\") pod \"620aba2c-f389-4fc9-a27c-28c937894f7d\" (UID: \"620aba2c-f389-4fc9-a27c-28c937894f7d\") " Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.423569 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620aba2c-f389-4fc9-a27c-28c937894f7d-bootstrap-combined-ca-bundle\") pod \"620aba2c-f389-4fc9-a27c-28c937894f7d\" (UID: \"620aba2c-f389-4fc9-a27c-28c937894f7d\") " Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.423721 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/620aba2c-f389-4fc9-a27c-28c937894f7d-ssh-key-openstack-edpm-ipam\") pod \"620aba2c-f389-4fc9-a27c-28c937894f7d\" (UID: \"620aba2c-f389-4fc9-a27c-28c937894f7d\") " Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.423857 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-725dm\" (UniqueName: \"kubernetes.io/projected/620aba2c-f389-4fc9-a27c-28c937894f7d-kube-api-access-725dm\") pod \"620aba2c-f389-4fc9-a27c-28c937894f7d\" (UID: \"620aba2c-f389-4fc9-a27c-28c937894f7d\") " Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.430615 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/620aba2c-f389-4fc9-a27c-28c937894f7d-kube-api-access-725dm" (OuterVolumeSpecName: "kube-api-access-725dm") pod "620aba2c-f389-4fc9-a27c-28c937894f7d" (UID: "620aba2c-f389-4fc9-a27c-28c937894f7d"). InnerVolumeSpecName "kube-api-access-725dm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.431140 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620aba2c-f389-4fc9-a27c-28c937894f7d-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "620aba2c-f389-4fc9-a27c-28c937894f7d" (UID: "620aba2c-f389-4fc9-a27c-28c937894f7d"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.454090 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620aba2c-f389-4fc9-a27c-28c937894f7d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "620aba2c-f389-4fc9-a27c-28c937894f7d" (UID: "620aba2c-f389-4fc9-a27c-28c937894f7d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.454473 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620aba2c-f389-4fc9-a27c-28c937894f7d-inventory" (OuterVolumeSpecName: "inventory") pod "620aba2c-f389-4fc9-a27c-28c937894f7d" (UID: "620aba2c-f389-4fc9-a27c-28c937894f7d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.526313 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-725dm\" (UniqueName: \"kubernetes.io/projected/620aba2c-f389-4fc9-a27c-28c937894f7d-kube-api-access-725dm\") on node \"crc\" DevicePath \"\"" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.526351 4870 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/620aba2c-f389-4fc9-a27c-28c937894f7d-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.526361 4870 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620aba2c-f389-4fc9-a27c-28c937894f7d-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.526370 4870 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/620aba2c-f389-4fc9-a27c-28c937894f7d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.817038 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c" event={"ID":"620aba2c-f389-4fc9-a27c-28c937894f7d","Type":"ContainerDied","Data":"5bf4e90c0f45b9a3ea6282a3aabea041683d9fdd400c0367a4321598c264f32e"} Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.817111 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bf4e90c0f45b9a3ea6282a3aabea041683d9fdd400c0367a4321598c264f32e" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.817170 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.903955 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p67q7"] Jan 30 08:37:39 crc kubenswrapper[4870]: E0130 08:37:39.904401 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a6d20bc-8755-40fc-a830-91f52584145f" containerName="extract-utilities" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.904418 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a6d20bc-8755-40fc-a830-91f52584145f" containerName="extract-utilities" Jan 30 08:37:39 crc kubenswrapper[4870]: E0130 08:37:39.904439 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620aba2c-f389-4fc9-a27c-28c937894f7d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.904447 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="620aba2c-f389-4fc9-a27c-28c937894f7d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 08:37:39 crc kubenswrapper[4870]: E0130 08:37:39.904466 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a6d20bc-8755-40fc-a830-91f52584145f" containerName="extract-content" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.904472 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a6d20bc-8755-40fc-a830-91f52584145f" containerName="extract-content" Jan 30 08:37:39 crc kubenswrapper[4870]: E0130 08:37:39.904498 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a6d20bc-8755-40fc-a830-91f52584145f" containerName="registry-server" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.904504 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a6d20bc-8755-40fc-a830-91f52584145f" containerName="registry-server" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.904660 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a6d20bc-8755-40fc-a830-91f52584145f" containerName="registry-server" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.904698 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="620aba2c-f389-4fc9-a27c-28c937894f7d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.905353 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p67q7" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.921958 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.922246 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n6f5w" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.922338 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.922563 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 08:37:39 crc kubenswrapper[4870]: I0130 08:37:39.925096 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p67q7"] Jan 30 08:37:40 crc kubenswrapper[4870]: I0130 08:37:40.041984 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9bef3cd3-94ab-486e-91de-c0ede57769d8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-p67q7\" (UID: \"9bef3cd3-94ab-486e-91de-c0ede57769d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p67q7" Jan 30 08:37:40 crc kubenswrapper[4870]: I0130 08:37:40.042329 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtqt9\" (UniqueName: \"kubernetes.io/projected/9bef3cd3-94ab-486e-91de-c0ede57769d8-kube-api-access-gtqt9\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-p67q7\" (UID: \"9bef3cd3-94ab-486e-91de-c0ede57769d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p67q7" Jan 30 08:37:40 crc kubenswrapper[4870]: I0130 08:37:40.042437 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9bef3cd3-94ab-486e-91de-c0ede57769d8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-p67q7\" (UID: \"9bef3cd3-94ab-486e-91de-c0ede57769d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p67q7" Jan 30 08:37:40 crc kubenswrapper[4870]: I0130 08:37:40.042644 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-kqrrr"] Jan 30 08:37:40 crc kubenswrapper[4870]: I0130 08:37:40.051523 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-6lzp5"] Jan 30 08:37:40 crc kubenswrapper[4870]: I0130 08:37:40.060493 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-kqrrr"] Jan 30 08:37:40 crc kubenswrapper[4870]: I0130 08:37:40.069587 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-6lzp5"] Jan 30 08:37:40 crc kubenswrapper[4870]: I0130 08:37:40.088678 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d425622-da05-4988-a059-013c06b4ecf1" path="/var/lib/kubelet/pods/4d425622-da05-4988-a059-013c06b4ecf1/volumes" Jan 30 08:37:40 crc kubenswrapper[4870]: I0130 08:37:40.089492 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59f46507-531f-4d06-86d9-6c07a50abc6d" 
path="/var/lib/kubelet/pods/59f46507-531f-4d06-86d9-6c07a50abc6d/volumes" Jan 30 08:37:40 crc kubenswrapper[4870]: I0130 08:37:40.144816 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtqt9\" (UniqueName: \"kubernetes.io/projected/9bef3cd3-94ab-486e-91de-c0ede57769d8-kube-api-access-gtqt9\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-p67q7\" (UID: \"9bef3cd3-94ab-486e-91de-c0ede57769d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p67q7" Jan 30 08:37:40 crc kubenswrapper[4870]: I0130 08:37:40.144905 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9bef3cd3-94ab-486e-91de-c0ede57769d8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-p67q7\" (UID: \"9bef3cd3-94ab-486e-91de-c0ede57769d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p67q7" Jan 30 08:37:40 crc kubenswrapper[4870]: I0130 08:37:40.145012 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9bef3cd3-94ab-486e-91de-c0ede57769d8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-p67q7\" (UID: \"9bef3cd3-94ab-486e-91de-c0ede57769d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p67q7" Jan 30 08:37:40 crc kubenswrapper[4870]: I0130 08:37:40.149056 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9bef3cd3-94ab-486e-91de-c0ede57769d8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-p67q7\" (UID: \"9bef3cd3-94ab-486e-91de-c0ede57769d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p67q7" Jan 30 08:37:40 crc kubenswrapper[4870]: I0130 08:37:40.153570 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9bef3cd3-94ab-486e-91de-c0ede57769d8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-p67q7\" (UID: \"9bef3cd3-94ab-486e-91de-c0ede57769d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p67q7" Jan 30 08:37:40 crc kubenswrapper[4870]: I0130 08:37:40.168604 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtqt9\" (UniqueName: \"kubernetes.io/projected/9bef3cd3-94ab-486e-91de-c0ede57769d8-kube-api-access-gtqt9\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-p67q7\" (UID: \"9bef3cd3-94ab-486e-91de-c0ede57769d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p67q7" Jan 30 08:37:40 crc kubenswrapper[4870]: I0130 08:37:40.246599 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p67q7" Jan 30 08:37:40 crc kubenswrapper[4870]: I0130 08:37:40.823400 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p67q7"] Jan 30 08:37:41 crc kubenswrapper[4870]: I0130 08:37:41.844218 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p67q7" event={"ID":"9bef3cd3-94ab-486e-91de-c0ede57769d8","Type":"ContainerStarted","Data":"626d115ac205077a01bbe8f25312875b05af3e1a0b1ae6dc536bf8f8aea4f69b"} Jan 30 08:37:42 crc kubenswrapper[4870]: I0130 08:37:42.864511 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p67q7" event={"ID":"9bef3cd3-94ab-486e-91de-c0ede57769d8","Type":"ContainerStarted","Data":"36acfc5bf960402457c7f8cc9040b5a8f64a76024ba8309a1355304bfe83c1d9"} Jan 30 08:37:42 crc kubenswrapper[4870]: I0130 08:37:42.892161 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p67q7" podStartSLOduration=2.85257085 podStartE2EDuration="3.89213968s" podCreationTimestamp="2026-01-30 08:37:39 +0000 UTC" firstStartedPulling="2026-01-30 08:37:40.827052008 +0000 UTC m=+1699.522599157" lastFinishedPulling="2026-01-30 08:37:41.866620868 +0000 UTC m=+1700.562167987" observedRunningTime="2026-01-30 08:37:42.88928013 +0000 UTC m=+1701.584827249" watchObservedRunningTime="2026-01-30 08:37:42.89213968 +0000 UTC m=+1701.587686789" Jan 30 08:37:46 crc kubenswrapper[4870]: I0130 08:37:46.074720 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49" Jan 30 08:37:46 crc kubenswrapper[4870]: E0130 08:37:46.075366 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:37:48 crc kubenswrapper[4870]: I0130 08:37:48.038354 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-8td6r"] Jan 30 08:37:48 crc kubenswrapper[4870]: I0130 08:37:48.047566 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-xrsjh"] Jan 30 08:37:48 crc kubenswrapper[4870]: I0130 08:37:48.057360 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9d1f-account-create-update-mffzg"] Jan 30 08:37:48 crc kubenswrapper[4870]: I0130 08:37:48.067575 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6de9-account-create-update-nwcgl"] Jan 30 08:37:48 crc kubenswrapper[4870]: I0130 08:37:48.083911 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-8td6r"] Jan 30 08:37:48 crc kubenswrapper[4870]: I0130 08:37:48.090176 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9d1f-account-create-update-mffzg"] Jan 30 08:37:48 crc kubenswrapper[4870]: I0130 08:37:48.098529 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-xrsjh"] Jan 30 08:37:48 crc kubenswrapper[4870]: I0130 08:37:48.115280 4870 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/cinder-937e-account-create-update-6w49r"] Jan 30 08:37:48 crc kubenswrapper[4870]: I0130 08:37:48.122653 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6de9-account-create-update-nwcgl"] Jan 30 08:37:48 crc kubenswrapper[4870]: I0130 08:37:48.130825 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-937e-account-create-update-6w49r"] Jan 30 08:37:48 crc kubenswrapper[4870]: I0130 08:37:48.138227 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-0515-account-create-update-rln5d"] Jan 30 08:37:48 crc kubenswrapper[4870]: I0130 08:37:48.146368 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-0515-account-create-update-rln5d"] Jan 30 08:37:50 crc kubenswrapper[4870]: I0130 08:37:50.086124 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="051874aa-a01e-40bf-a987-a830886ea878" path="/var/lib/kubelet/pods/051874aa-a01e-40bf-a987-a830886ea878/volumes" Jan 30 08:37:50 crc kubenswrapper[4870]: I0130 08:37:50.087374 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17e1f740-4393-4ba2-8242-fb863196cb02" path="/var/lib/kubelet/pods/17e1f740-4393-4ba2-8242-fb863196cb02/volumes" Jan 30 08:37:50 crc kubenswrapper[4870]: I0130 08:37:50.088394 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19155d05-01da-4e21-96c2-f23662f8f785" path="/var/lib/kubelet/pods/19155d05-01da-4e21-96c2-f23662f8f785/volumes" Jan 30 08:37:50 crc kubenswrapper[4870]: I0130 08:37:50.089167 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6566e49-850d-460e-9a22-9bfd7384f494" path="/var/lib/kubelet/pods/b6566e49-850d-460e-9a22-9bfd7384f494/volumes" Jan 30 08:37:50 crc kubenswrapper[4870]: I0130 08:37:50.089807 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfc35112-b552-434a-b702-26c53cbf5574" path="/var/lib/kubelet/pods/dfc35112-b552-434a-b702-26c53cbf5574/volumes" Jan 30 08:37:50 crc kubenswrapper[4870]: I0130 08:37:50.090456 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb61b735-bf9c-4bf5-a5cf-1948435af72e" path="/var/lib/kubelet/pods/eb61b735-bf9c-4bf5-a5cf-1948435af72e/volumes" Jan 30 08:37:53 crc kubenswrapper[4870]: I0130 08:37:53.049545 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-gbfzh"] Jan 30 08:37:53 crc kubenswrapper[4870]: I0130 08:37:53.057311 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-gbfzh"] Jan 30 08:37:54 crc kubenswrapper[4870]: I0130 08:37:54.039396 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-f6r68"] Jan 30 08:37:54 crc kubenswrapper[4870]: I0130 08:37:54.052971 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-f6r68"] Jan 30 08:37:54 crc kubenswrapper[4870]: I0130 08:37:54.085740 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="881527d5-776b-4639-9306-895d1e370abd" path="/var/lib/kubelet/pods/881527d5-776b-4639-9306-895d1e370abd/volumes" Jan 30 08:37:54 crc kubenswrapper[4870]: I0130 08:37:54.093854 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8637667-8b7e-455e-8ba9-b6291574e4ce" path="/var/lib/kubelet/pods/e8637667-8b7e-455e-8ba9-b6291574e4ce/volumes" Jan 30 08:37:59 crc kubenswrapper[4870]: I0130 08:37:59.074598 4870 scope.go:117] "RemoveContainer" 
containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49" Jan 30 08:37:59 crc kubenswrapper[4870]: E0130 08:37:59.075337 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:38:14 crc kubenswrapper[4870]: I0130 08:38:14.077838 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49" Jan 30 08:38:14 crc kubenswrapper[4870]: E0130 08:38:14.078647 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:38:29 crc kubenswrapper[4870]: I0130 08:38:29.075532 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49" Jan 30 08:38:29 crc kubenswrapper[4870]: E0130 08:38:29.077630 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:38:36 crc kubenswrapper[4870]: I0130 08:38:36.047080 4870 scope.go:117] "RemoveContainer" containerID="0e37f5a9ce757405b6d3e4a3c9aee5ca81c0dd18541f09603a2ca8623a81a084" Jan 30 08:38:36 crc kubenswrapper[4870]: I0130 08:38:36.111544 4870 scope.go:117] "RemoveContainer" containerID="ebb6defef32112bcd4f761a254fe06dd72ca1e2b11d0f09023e3983d12f747be" Jan 30 08:38:36 crc kubenswrapper[4870]: I0130 08:38:36.147018 4870 scope.go:117] "RemoveContainer" containerID="c7c08f4bc1bd775e569c12ce6f45113dd74be7d4b1436663db01b3cc4e31c119" Jan 30 08:38:36 crc kubenswrapper[4870]: I0130 08:38:36.214673 4870 scope.go:117] "RemoveContainer" containerID="a86a16c99cbecfe80af026afdf8bc6eec15eafc6658e69be7a63babe7a18aa00" Jan 30 08:38:36 crc kubenswrapper[4870]: I0130 08:38:36.267077 4870 scope.go:117] "RemoveContainer" containerID="c19fa8ba72448fbf848d632a4b2c87c38ba00d3573897003f02d36a9263593ff" Jan 30 08:38:36 crc kubenswrapper[4870]: I0130 08:38:36.308537 4870 scope.go:117] "RemoveContainer" containerID="cb21d97629ca986697c0491abe322efbe6175a053c82fd76648eaa5b827fb2cc" Jan 30 08:38:36 crc kubenswrapper[4870]: I0130 08:38:36.357001 4870 scope.go:117] "RemoveContainer" containerID="6dcb2a606401562e049d19a34d68af34e28fc99c34413a4f7cfddf60bc5211ee" Jan 30 08:38:36 crc kubenswrapper[4870]: I0130 08:38:36.376332 4870 scope.go:117] "RemoveContainer" containerID="96c828944b59ded4cdb603b725476894266a7134b9d788fbea6f5b49b309942a" Jan 30 08:38:36 crc kubenswrapper[4870]: I0130 08:38:36.394010 4870 scope.go:117] "RemoveContainer" containerID="b1933043ebcbf2051360c783e7b0fa2a563a6c4cee962802cf9d526f5fcd348c" Jan 30 08:38:36 crc 
Jan 30 08:38:40 crc kubenswrapper[4870]: I0130 08:38:40.075383 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49"
Jan 30 08:38:40 crc kubenswrapper[4870]: E0130 08:38:40.076361 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 08:38:43 crc kubenswrapper[4870]: I0130 08:38:43.060991 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-g4m9m"]
Jan 30 08:38:43 crc kubenswrapper[4870]: I0130 08:38:43.069980 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-g4m9m"]
Jan 30 08:38:44 crc kubenswrapper[4870]: I0130 08:38:44.096729 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9b91a69-f8ad-4d1d-a47d-c1921071c71a" path="/var/lib/kubelet/pods/b9b91a69-f8ad-4d1d-a47d-c1921071c71a/volumes"
Jan 30 08:38:54 crc kubenswrapper[4870]: I0130 08:38:54.074846 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49"
Jan 30 08:38:54 crc kubenswrapper[4870]: E0130 08:38:54.077565 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 08:38:57 crc kubenswrapper[4870]: I0130 08:38:57.068649 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-b57k5"]
Jan 30 08:38:57 crc kubenswrapper[4870]: I0130 08:38:57.080234 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-b57k5"]
Jan 30 08:38:58 crc kubenswrapper[4870]: I0130 08:38:58.085963 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1435e0c6-e24a-44d4-bf78-3e5300e23cdd" path="/var/lib/kubelet/pods/1435e0c6-e24a-44d4-bf78-3e5300e23cdd/volumes"
Jan 30 08:39:07 crc kubenswrapper[4870]: I0130 08:39:07.075217 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49"
Jan 30 08:39:07 crc kubenswrapper[4870]: E0130 08:39:07.075865 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 08:39:08 crc kubenswrapper[4870]: I0130 08:39:08.048973 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-d2mx7"]
Jan 30 08:39:08 crc kubenswrapper[4870]: I0130 08:39:08.065092 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-9g27p"]
Jan 30 08:39:08 crc kubenswrapper[4870]: I0130 08:39:08.103479 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-d2mx7"]
Jan 30 08:39:08 crc kubenswrapper[4870]: I0130 08:39:08.103537 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-9g27p"]
Jan 30 08:39:09 crc kubenswrapper[4870]: I0130 08:39:09.033860 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-9mjj4"]
Jan 30 08:39:09 crc kubenswrapper[4870]: I0130 08:39:09.043056 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-9mjj4"]
Jan 30 08:39:10 crc kubenswrapper[4870]: I0130 08:39:10.089810 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="505df376-c8bc-44ce-9c14-8cf94730c550" path="/var/lib/kubelet/pods/505df376-c8bc-44ce-9c14-8cf94730c550/volumes"
Jan 30 08:39:10 crc kubenswrapper[4870]: I0130 08:39:10.090623 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="685bde78-dea1-4864-a825-af176178bd11" path="/var/lib/kubelet/pods/685bde78-dea1-4864-a825-af176178bd11/volumes"
Jan 30 08:39:10 crc kubenswrapper[4870]: I0130 08:39:10.091598 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3bd649e-5c3c-495f-933f-3b516167cbd2" path="/var/lib/kubelet/pods/c3bd649e-5c3c-495f-933f-3b516167cbd2/volumes"
Jan 30 08:39:12 crc kubenswrapper[4870]: I0130 08:39:12.032488 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-tssp8"]
Jan 30 08:39:12 crc kubenswrapper[4870]: I0130 08:39:12.041432 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-tssp8"]
Jan 30 08:39:12 crc kubenswrapper[4870]: I0130 08:39:12.086138 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edd09a42-14b6-4161-ba2a-82c4cf4f5983" path="/var/lib/kubelet/pods/edd09a42-14b6-4161-ba2a-82c4cf4f5983/volumes"
Jan 30 08:39:18 crc kubenswrapper[4870]: I0130 08:39:18.075118 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49"
Jan 30 08:39:18 crc kubenswrapper[4870]: E0130 08:39:18.075838 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 08:39:31 crc kubenswrapper[4870]: I0130 08:39:31.075579 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49"
Jan 30 08:39:31 crc kubenswrapper[4870]: E0130 08:39:31.076464 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 08:39:35 crc kubenswrapper[4870]: I0130 08:39:35.003775 4870 generic.go:334] "Generic (PLEG): container finished" podID="9bef3cd3-94ab-486e-91de-c0ede57769d8" containerID="36acfc5bf960402457c7f8cc9040b5a8f64a76024ba8309a1355304bfe83c1d9" exitCode=0
Jan 30 08:39:35 crc kubenswrapper[4870]: I0130 08:39:35.003904 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p67q7" event={"ID":"9bef3cd3-94ab-486e-91de-c0ede57769d8","Type":"ContainerDied","Data":"36acfc5bf960402457c7f8cc9040b5a8f64a76024ba8309a1355304bfe83c1d9"}
Jan 30 08:39:36 crc kubenswrapper[4870]: I0130 08:39:36.439343 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p67q7"
Jan 30 08:39:36 crc kubenswrapper[4870]: I0130 08:39:36.506790 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9bef3cd3-94ab-486e-91de-c0ede57769d8-inventory\") pod \"9bef3cd3-94ab-486e-91de-c0ede57769d8\" (UID: \"9bef3cd3-94ab-486e-91de-c0ede57769d8\") "
Jan 30 08:39:36 crc kubenswrapper[4870]: I0130 08:39:36.506859 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtqt9\" (UniqueName: \"kubernetes.io/projected/9bef3cd3-94ab-486e-91de-c0ede57769d8-kube-api-access-gtqt9\") pod \"9bef3cd3-94ab-486e-91de-c0ede57769d8\" (UID: \"9bef3cd3-94ab-486e-91de-c0ede57769d8\") "
Jan 30 08:39:36 crc kubenswrapper[4870]: I0130 08:39:36.506916 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9bef3cd3-94ab-486e-91de-c0ede57769d8-ssh-key-openstack-edpm-ipam\") pod \"9bef3cd3-94ab-486e-91de-c0ede57769d8\" (UID: \"9bef3cd3-94ab-486e-91de-c0ede57769d8\") "
Jan 30 08:39:36 crc kubenswrapper[4870]: I0130 08:39:36.513700 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bef3cd3-94ab-486e-91de-c0ede57769d8-kube-api-access-gtqt9" (OuterVolumeSpecName: "kube-api-access-gtqt9") pod "9bef3cd3-94ab-486e-91de-c0ede57769d8" (UID: "9bef3cd3-94ab-486e-91de-c0ede57769d8"). InnerVolumeSpecName "kube-api-access-gtqt9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:39:36 crc kubenswrapper[4870]: I0130 08:39:36.542829 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bef3cd3-94ab-486e-91de-c0ede57769d8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9bef3cd3-94ab-486e-91de-c0ede57769d8" (UID: "9bef3cd3-94ab-486e-91de-c0ede57769d8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:39:36 crc kubenswrapper[4870]: I0130 08:39:36.547075 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bef3cd3-94ab-486e-91de-c0ede57769d8-inventory" (OuterVolumeSpecName: "inventory") pod "9bef3cd3-94ab-486e-91de-c0ede57769d8" (UID: "9bef3cd3-94ab-486e-91de-c0ede57769d8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:39:36 crc kubenswrapper[4870]: I0130 08:39:36.609080 4870 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9bef3cd3-94ab-486e-91de-c0ede57769d8-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 08:39:36 crc kubenswrapper[4870]: I0130 08:39:36.609280 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtqt9\" (UniqueName: \"kubernetes.io/projected/9bef3cd3-94ab-486e-91de-c0ede57769d8-kube-api-access-gtqt9\") on node \"crc\" DevicePath \"\""
Jan 30 08:39:36 crc kubenswrapper[4870]: I0130 08:39:36.609345 4870 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9bef3cd3-94ab-486e-91de-c0ede57769d8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 08:39:36 crc kubenswrapper[4870]: I0130 08:39:36.612107 4870 scope.go:117] "RemoveContainer" containerID="3e1279140ba8261786354ed9fbacfe2a2a43a2b8decaba7ca7c7b15754ed7ff9"
Jan 30 08:39:36 crc kubenswrapper[4870]: I0130 08:39:36.650536 4870 scope.go:117] "RemoveContainer" containerID="423e4f8207599a836d08eca85be2c21680c69e731edaed6ac9d59c605d325bfb"
Jan 30 08:39:36 crc kubenswrapper[4870]: I0130 08:39:36.800911 4870 scope.go:117] "RemoveContainer" containerID="c723dc182803022ba9e618ac6407cbccb617a7c5a0a43457386f580c7a154614"
Jan 30 08:39:36 crc kubenswrapper[4870]: I0130 08:39:36.845681 4870 scope.go:117] "RemoveContainer" containerID="ac1cfe0654d6d9f59d0d7bba982a578597204c7a7dcbab5f91122bf878031c77"
Jan 30 08:39:36 crc kubenswrapper[4870]: I0130 08:39:36.873007 4870 scope.go:117] "RemoveContainer" containerID="51fd04d1413a7bb8dd1010fcf50ab478d7211a73c87542e70aaae3ce82cc9053"
Jan 30 08:39:36 crc kubenswrapper[4870]: I0130 08:39:36.915668 4870 scope.go:117] "RemoveContainer" containerID="85f1049088e388e69d6da33f4eab9143943bc4d4ba2179d9093657152d474310"
Jan 30 08:39:37 crc kubenswrapper[4870]: I0130 08:39:37.026730 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p67q7" event={"ID":"9bef3cd3-94ab-486e-91de-c0ede57769d8","Type":"ContainerDied","Data":"626d115ac205077a01bbe8f25312875b05af3e1a0b1ae6dc536bf8f8aea4f69b"}
Jan 30 08:39:37 crc kubenswrapper[4870]: I0130 08:39:37.026778 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="626d115ac205077a01bbe8f25312875b05af3e1a0b1ae6dc536bf8f8aea4f69b"
Jan 30 08:39:37 crc kubenswrapper[4870]: I0130 08:39:37.026839 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p67q7"
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-p67q7" Jan 30 08:39:37 crc kubenswrapper[4870]: I0130 08:39:37.111291 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-948rh"] Jan 30 08:39:37 crc kubenswrapper[4870]: E0130 08:39:37.111907 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bef3cd3-94ab-486e-91de-c0ede57769d8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 08:39:37 crc kubenswrapper[4870]: I0130 08:39:37.111935 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bef3cd3-94ab-486e-91de-c0ede57769d8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 08:39:37 crc kubenswrapper[4870]: I0130 08:39:37.112177 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bef3cd3-94ab-486e-91de-c0ede57769d8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 08:39:37 crc kubenswrapper[4870]: I0130 08:39:37.113066 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-948rh" Jan 30 08:39:37 crc kubenswrapper[4870]: I0130 08:39:37.114847 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 08:39:37 crc kubenswrapper[4870]: I0130 08:39:37.115413 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n6f5w" Jan 30 08:39:37 crc kubenswrapper[4870]: I0130 08:39:37.115503 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 08:39:37 crc kubenswrapper[4870]: I0130 08:39:37.115584 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 08:39:37 crc kubenswrapper[4870]: I0130 08:39:37.140678 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-948rh"] Jan 30 08:39:37 crc kubenswrapper[4870]: I0130 08:39:37.230780 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xplnt\" (UniqueName: \"kubernetes.io/projected/1eea19c9-87be-4160-8c11-c7ecd13cf088-kube-api-access-xplnt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-948rh\" (UID: \"1eea19c9-87be-4160-8c11-c7ecd13cf088\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-948rh" Jan 30 08:39:37 crc kubenswrapper[4870]: I0130 08:39:37.231037 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1eea19c9-87be-4160-8c11-c7ecd13cf088-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-948rh\" (UID: \"1eea19c9-87be-4160-8c11-c7ecd13cf088\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-948rh" Jan 30 08:39:37 crc kubenswrapper[4870]: I0130 08:39:37.231261 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1eea19c9-87be-4160-8c11-c7ecd13cf088-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-948rh\" (UID: \"1eea19c9-87be-4160-8c11-c7ecd13cf088\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-948rh" Jan 30 08:39:37 crc kubenswrapper[4870]: I0130 08:39:37.333097 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xplnt\" (UniqueName: \"kubernetes.io/projected/1eea19c9-87be-4160-8c11-c7ecd13cf088-kube-api-access-xplnt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-948rh\" (UID: \"1eea19c9-87be-4160-8c11-c7ecd13cf088\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-948rh" Jan 30 08:39:37 crc kubenswrapper[4870]: I0130 08:39:37.333263 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1eea19c9-87be-4160-8c11-c7ecd13cf088-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-948rh\" (UID: \"1eea19c9-87be-4160-8c11-c7ecd13cf088\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-948rh" Jan 30 08:39:37 crc kubenswrapper[4870]: I0130 08:39:37.333509 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1eea19c9-87be-4160-8c11-c7ecd13cf088-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-948rh\" (UID: \"1eea19c9-87be-4160-8c11-c7ecd13cf088\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-948rh" Jan 30 08:39:37 crc kubenswrapper[4870]: I0130 08:39:37.338838 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1eea19c9-87be-4160-8c11-c7ecd13cf088-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-948rh\" (UID: \"1eea19c9-87be-4160-8c11-c7ecd13cf088\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-948rh" Jan 30 08:39:37 crc kubenswrapper[4870]: I0130 08:39:37.338665 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1eea19c9-87be-4160-8c11-c7ecd13cf088-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-948rh\" (UID: \"1eea19c9-87be-4160-8c11-c7ecd13cf088\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-948rh" Jan 30 08:39:37 crc kubenswrapper[4870]: I0130 08:39:37.349918 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xplnt\" (UniqueName: \"kubernetes.io/projected/1eea19c9-87be-4160-8c11-c7ecd13cf088-kube-api-access-xplnt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-948rh\" (UID: \"1eea19c9-87be-4160-8c11-c7ecd13cf088\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-948rh" Jan 30 08:39:37 crc kubenswrapper[4870]: I0130 08:39:37.434843 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-948rh" Jan 30 08:39:38 crc kubenswrapper[4870]: W0130 08:39:38.054158 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1eea19c9_87be_4160_8c11_c7ecd13cf088.slice/crio-1770fb330557b332b33d9381d3b29b9bcd4d66980b06a1acc3c5507ce1e08776 WatchSource:0}: Error finding container 1770fb330557b332b33d9381d3b29b9bcd4d66980b06a1acc3c5507ce1e08776: Status 404 returned error can't find the container with id 1770fb330557b332b33d9381d3b29b9bcd4d66980b06a1acc3c5507ce1e08776 Jan 30 08:39:38 crc kubenswrapper[4870]: I0130 08:39:38.072697 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-948rh"] Jan 30 08:39:39 crc kubenswrapper[4870]: I0130 08:39:39.053416 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-948rh" event={"ID":"1eea19c9-87be-4160-8c11-c7ecd13cf088","Type":"ContainerStarted","Data":"1770fb330557b332b33d9381d3b29b9bcd4d66980b06a1acc3c5507ce1e08776"} Jan 30 08:39:41 crc kubenswrapper[4870]: I0130 08:39:41.077493 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-948rh" event={"ID":"1eea19c9-87be-4160-8c11-c7ecd13cf088","Type":"ContainerStarted","Data":"7932ebc87656bf0ae01e190d06d498a15ca8251a6e3095a4f845c9e27a4ab873"} Jan 30 08:39:41 crc kubenswrapper[4870]: I0130 08:39:41.096539 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-948rh" podStartSLOduration=2.297782708 podStartE2EDuration="4.096518885s" podCreationTimestamp="2026-01-30 08:39:37 +0000 UTC" firstStartedPulling="2026-01-30 08:39:38.056584454 +0000 UTC m=+1816.752131563" lastFinishedPulling="2026-01-30 08:39:39.855320631 +0000 UTC m=+1818.550867740" observedRunningTime="2026-01-30 08:39:41.093699706 +0000 UTC m=+1819.789246815" watchObservedRunningTime="2026-01-30 08:39:41.096518885 +0000 UTC m=+1819.792066004" Jan 30 08:39:42 crc kubenswrapper[4870]: I0130 08:39:42.218792 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-54dp6"] Jan 30 08:39:42 crc kubenswrapper[4870]: I0130 08:39:42.221241 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-54dp6" Jan 30 08:39:42 crc kubenswrapper[4870]: I0130 08:39:42.229384 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-54dp6"] Jan 30 08:39:42 crc kubenswrapper[4870]: I0130 08:39:42.343113 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0419d51d-7b10-4e0f-b6ba-196fafeb8df2-catalog-content\") pod \"redhat-marketplace-54dp6\" (UID: \"0419d51d-7b10-4e0f-b6ba-196fafeb8df2\") " pod="openshift-marketplace/redhat-marketplace-54dp6" Jan 30 08:39:42 crc kubenswrapper[4870]: I0130 08:39:42.343201 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0419d51d-7b10-4e0f-b6ba-196fafeb8df2-utilities\") pod \"redhat-marketplace-54dp6\" (UID: \"0419d51d-7b10-4e0f-b6ba-196fafeb8df2\") " pod="openshift-marketplace/redhat-marketplace-54dp6" Jan 30 08:39:42 crc kubenswrapper[4870]: I0130 08:39:42.343420 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wdd5\" (UniqueName: \"kubernetes.io/projected/0419d51d-7b10-4e0f-b6ba-196fafeb8df2-kube-api-access-4wdd5\") pod \"redhat-marketplace-54dp6\" (UID: \"0419d51d-7b10-4e0f-b6ba-196fafeb8df2\") " pod="openshift-marketplace/redhat-marketplace-54dp6" Jan 30 08:39:42 crc kubenswrapper[4870]: I0130 08:39:42.446114 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0419d51d-7b10-4e0f-b6ba-196fafeb8df2-catalog-content\") pod \"redhat-marketplace-54dp6\" (UID: \"0419d51d-7b10-4e0f-b6ba-196fafeb8df2\") " pod="openshift-marketplace/redhat-marketplace-54dp6" Jan 30 08:39:42 crc kubenswrapper[4870]: I0130 08:39:42.446583 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0419d51d-7b10-4e0f-b6ba-196fafeb8df2-utilities\") pod \"redhat-marketplace-54dp6\" (UID: \"0419d51d-7b10-4e0f-b6ba-196fafeb8df2\") " pod="openshift-marketplace/redhat-marketplace-54dp6" Jan 30 08:39:42 crc kubenswrapper[4870]: I0130 08:39:42.446907 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wdd5\" (UniqueName: \"kubernetes.io/projected/0419d51d-7b10-4e0f-b6ba-196fafeb8df2-kube-api-access-4wdd5\") pod \"redhat-marketplace-54dp6\" (UID: \"0419d51d-7b10-4e0f-b6ba-196fafeb8df2\") " pod="openshift-marketplace/redhat-marketplace-54dp6" Jan 30 08:39:42 crc kubenswrapper[4870]: I0130 08:39:42.446919 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0419d51d-7b10-4e0f-b6ba-196fafeb8df2-catalog-content\") pod \"redhat-marketplace-54dp6\" (UID: \"0419d51d-7b10-4e0f-b6ba-196fafeb8df2\") " pod="openshift-marketplace/redhat-marketplace-54dp6" Jan 30 08:39:42 crc kubenswrapper[4870]: I0130 08:39:42.447291 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0419d51d-7b10-4e0f-b6ba-196fafeb8df2-utilities\") pod \"redhat-marketplace-54dp6\" (UID: \"0419d51d-7b10-4e0f-b6ba-196fafeb8df2\") " pod="openshift-marketplace/redhat-marketplace-54dp6" Jan 30 08:39:42 crc kubenswrapper[4870]: I0130 08:39:42.472849 4870 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4wdd5\" (UniqueName: \"kubernetes.io/projected/0419d51d-7b10-4e0f-b6ba-196fafeb8df2-kube-api-access-4wdd5\") pod \"redhat-marketplace-54dp6\" (UID: \"0419d51d-7b10-4e0f-b6ba-196fafeb8df2\") " pod="openshift-marketplace/redhat-marketplace-54dp6" Jan 30 08:39:42 crc kubenswrapper[4870]: I0130 08:39:42.540636 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-54dp6" Jan 30 08:39:43 crc kubenswrapper[4870]: I0130 08:39:43.003452 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-54dp6"] Jan 30 08:39:43 crc kubenswrapper[4870]: I0130 08:39:43.108579 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54dp6" event={"ID":"0419d51d-7b10-4e0f-b6ba-196fafeb8df2","Type":"ContainerStarted","Data":"5d13e2421f8c4e836dce39d4509ef089cfed99d261e72152a996ec22cfbe9f95"} Jan 30 08:39:44 crc kubenswrapper[4870]: I0130 08:39:44.123216 4870 generic.go:334] "Generic (PLEG): container finished" podID="0419d51d-7b10-4e0f-b6ba-196fafeb8df2" containerID="a125d3f7cbe7f9ef0dc61cf3a2660410f3c17fb97c49bd74531df00e9d460f3d" exitCode=0 Jan 30 08:39:44 crc kubenswrapper[4870]: I0130 08:39:44.123276 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54dp6" event={"ID":"0419d51d-7b10-4e0f-b6ba-196fafeb8df2","Type":"ContainerDied","Data":"a125d3f7cbe7f9ef0dc61cf3a2660410f3c17fb97c49bd74531df00e9d460f3d"} Jan 30 08:39:46 crc kubenswrapper[4870]: I0130 08:39:46.075315 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49" Jan 30 08:39:46 crc kubenswrapper[4870]: E0130 08:39:46.076155 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:39:46 crc kubenswrapper[4870]: I0130 08:39:46.145204 4870 generic.go:334] "Generic (PLEG): container finished" podID="0419d51d-7b10-4e0f-b6ba-196fafeb8df2" containerID="ae4aae4d2815ded1520810f050e6a1cae92f3b27778430a63762bed46deb47e8" exitCode=0 Jan 30 08:39:46 crc kubenswrapper[4870]: I0130 08:39:46.145244 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54dp6" event={"ID":"0419d51d-7b10-4e0f-b6ba-196fafeb8df2","Type":"ContainerDied","Data":"ae4aae4d2815ded1520810f050e6a1cae92f3b27778430a63762bed46deb47e8"} Jan 30 08:39:47 crc kubenswrapper[4870]: I0130 08:39:47.156085 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54dp6" event={"ID":"0419d51d-7b10-4e0f-b6ba-196fafeb8df2","Type":"ContainerStarted","Data":"80b1b36e8da8d957d286029da9530e105f323a31e33617a692d7d80c253a4da8"} Jan 30 08:39:47 crc kubenswrapper[4870]: I0130 08:39:47.181663 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-54dp6" podStartSLOduration=2.669611535 podStartE2EDuration="5.181645517s" podCreationTimestamp="2026-01-30 08:39:42 +0000 UTC" firstStartedPulling="2026-01-30 08:39:44.125951545 +0000 UTC m=+1822.821498654" lastFinishedPulling="2026-01-30 
08:39:46.637985527 +0000 UTC m=+1825.333532636" observedRunningTime="2026-01-30 08:39:47.178242331 +0000 UTC m=+1825.873789440" watchObservedRunningTime="2026-01-30 08:39:47.181645517 +0000 UTC m=+1825.877192626" Jan 30 08:39:52 crc kubenswrapper[4870]: I0130 08:39:52.541044 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-54dp6" Jan 30 08:39:52 crc kubenswrapper[4870]: I0130 08:39:52.541549 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-54dp6" Jan 30 08:39:52 crc kubenswrapper[4870]: I0130 08:39:52.595788 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-54dp6" Jan 30 08:39:53 crc kubenswrapper[4870]: I0130 08:39:53.249045 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-54dp6" Jan 30 08:39:53 crc kubenswrapper[4870]: I0130 08:39:53.300774 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-54dp6"] Jan 30 08:39:55 crc kubenswrapper[4870]: I0130 08:39:55.224406 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-54dp6" podUID="0419d51d-7b10-4e0f-b6ba-196fafeb8df2" containerName="registry-server" containerID="cri-o://80b1b36e8da8d957d286029da9530e105f323a31e33617a692d7d80c253a4da8" gracePeriod=2 Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.202611 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-54dp6" Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.234375 4870 generic.go:334] "Generic (PLEG): container finished" podID="0419d51d-7b10-4e0f-b6ba-196fafeb8df2" containerID="80b1b36e8da8d957d286029da9530e105f323a31e33617a692d7d80c253a4da8" exitCode=0 Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.234418 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54dp6" event={"ID":"0419d51d-7b10-4e0f-b6ba-196fafeb8df2","Type":"ContainerDied","Data":"80b1b36e8da8d957d286029da9530e105f323a31e33617a692d7d80c253a4da8"} Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.234443 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54dp6" event={"ID":"0419d51d-7b10-4e0f-b6ba-196fafeb8df2","Type":"ContainerDied","Data":"5d13e2421f8c4e836dce39d4509ef089cfed99d261e72152a996ec22cfbe9f95"} Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.234460 4870 scope.go:117] "RemoveContainer" containerID="80b1b36e8da8d957d286029da9530e105f323a31e33617a692d7d80c253a4da8" Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.234582 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-54dp6" Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.259995 4870 scope.go:117] "RemoveContainer" containerID="ae4aae4d2815ded1520810f050e6a1cae92f3b27778430a63762bed46deb47e8" Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.283728 4870 scope.go:117] "RemoveContainer" containerID="a125d3f7cbe7f9ef0dc61cf3a2660410f3c17fb97c49bd74531df00e9d460f3d" Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.318464 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0419d51d-7b10-4e0f-b6ba-196fafeb8df2-catalog-content\") pod \"0419d51d-7b10-4e0f-b6ba-196fafeb8df2\" (UID: \"0419d51d-7b10-4e0f-b6ba-196fafeb8df2\") " Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.318511 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0419d51d-7b10-4e0f-b6ba-196fafeb8df2-utilities\") pod \"0419d51d-7b10-4e0f-b6ba-196fafeb8df2\" (UID: \"0419d51d-7b10-4e0f-b6ba-196fafeb8df2\") " Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.318639 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wdd5\" (UniqueName: \"kubernetes.io/projected/0419d51d-7b10-4e0f-b6ba-196fafeb8df2-kube-api-access-4wdd5\") pod \"0419d51d-7b10-4e0f-b6ba-196fafeb8df2\" (UID: \"0419d51d-7b10-4e0f-b6ba-196fafeb8df2\") " Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.319765 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0419d51d-7b10-4e0f-b6ba-196fafeb8df2-utilities" (OuterVolumeSpecName: "utilities") pod "0419d51d-7b10-4e0f-b6ba-196fafeb8df2" (UID: "0419d51d-7b10-4e0f-b6ba-196fafeb8df2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.327743 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0419d51d-7b10-4e0f-b6ba-196fafeb8df2-kube-api-access-4wdd5" (OuterVolumeSpecName: "kube-api-access-4wdd5") pod "0419d51d-7b10-4e0f-b6ba-196fafeb8df2" (UID: "0419d51d-7b10-4e0f-b6ba-196fafeb8df2"). InnerVolumeSpecName "kube-api-access-4wdd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.346081 4870 scope.go:117] "RemoveContainer" containerID="80b1b36e8da8d957d286029da9530e105f323a31e33617a692d7d80c253a4da8" Jan 30 08:39:56 crc kubenswrapper[4870]: E0130 08:39:56.346701 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80b1b36e8da8d957d286029da9530e105f323a31e33617a692d7d80c253a4da8\": container with ID starting with 80b1b36e8da8d957d286029da9530e105f323a31e33617a692d7d80c253a4da8 not found: ID does not exist" containerID="80b1b36e8da8d957d286029da9530e105f323a31e33617a692d7d80c253a4da8" Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.346751 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0419d51d-7b10-4e0f-b6ba-196fafeb8df2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0419d51d-7b10-4e0f-b6ba-196fafeb8df2" (UID: "0419d51d-7b10-4e0f-b6ba-196fafeb8df2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.346759 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80b1b36e8da8d957d286029da9530e105f323a31e33617a692d7d80c253a4da8"} err="failed to get container status \"80b1b36e8da8d957d286029da9530e105f323a31e33617a692d7d80c253a4da8\": rpc error: code = NotFound desc = could not find container \"80b1b36e8da8d957d286029da9530e105f323a31e33617a692d7d80c253a4da8\": container with ID starting with 80b1b36e8da8d957d286029da9530e105f323a31e33617a692d7d80c253a4da8 not found: ID does not exist" Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.346795 4870 scope.go:117] "RemoveContainer" containerID="ae4aae4d2815ded1520810f050e6a1cae92f3b27778430a63762bed46deb47e8" Jan 30 08:39:56 crc kubenswrapper[4870]: E0130 08:39:56.347237 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae4aae4d2815ded1520810f050e6a1cae92f3b27778430a63762bed46deb47e8\": container with ID starting with ae4aae4d2815ded1520810f050e6a1cae92f3b27778430a63762bed46deb47e8 not found: ID does not exist" containerID="ae4aae4d2815ded1520810f050e6a1cae92f3b27778430a63762bed46deb47e8" Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.347278 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae4aae4d2815ded1520810f050e6a1cae92f3b27778430a63762bed46deb47e8"} err="failed to get container status \"ae4aae4d2815ded1520810f050e6a1cae92f3b27778430a63762bed46deb47e8\": rpc error: code = NotFound desc = could not find container \"ae4aae4d2815ded1520810f050e6a1cae92f3b27778430a63762bed46deb47e8\": container with ID starting with ae4aae4d2815ded1520810f050e6a1cae92f3b27778430a63762bed46deb47e8 not found: ID does not exist" Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.347292 4870 scope.go:117] "RemoveContainer" containerID="a125d3f7cbe7f9ef0dc61cf3a2660410f3c17fb97c49bd74531df00e9d460f3d" Jan 30 08:39:56 crc kubenswrapper[4870]: E0130 08:39:56.347565 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a125d3f7cbe7f9ef0dc61cf3a2660410f3c17fb97c49bd74531df00e9d460f3d\": container with ID starting with a125d3f7cbe7f9ef0dc61cf3a2660410f3c17fb97c49bd74531df00e9d460f3d not found: ID does not exist" containerID="a125d3f7cbe7f9ef0dc61cf3a2660410f3c17fb97c49bd74531df00e9d460f3d" Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.347587 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a125d3f7cbe7f9ef0dc61cf3a2660410f3c17fb97c49bd74531df00e9d460f3d"} err="failed to get container status \"a125d3f7cbe7f9ef0dc61cf3a2660410f3c17fb97c49bd74531df00e9d460f3d\": rpc error: code = NotFound desc = could not find container \"a125d3f7cbe7f9ef0dc61cf3a2660410f3c17fb97c49bd74531df00e9d460f3d\": container with ID starting with a125d3f7cbe7f9ef0dc61cf3a2660410f3c17fb97c49bd74531df00e9d460f3d not found: ID does not exist" Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.421701 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wdd5\" (UniqueName: \"kubernetes.io/projected/0419d51d-7b10-4e0f-b6ba-196fafeb8df2-kube-api-access-4wdd5\") on node \"crc\" DevicePath \"\"" Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.421741 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/0419d51d-7b10-4e0f-b6ba-196fafeb8df2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.421755 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0419d51d-7b10-4e0f-b6ba-196fafeb8df2-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.570063 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-54dp6"] Jan 30 08:39:56 crc kubenswrapper[4870]: I0130 08:39:56.578532 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-54dp6"] Jan 30 08:39:58 crc kubenswrapper[4870]: I0130 08:39:58.090269 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0419d51d-7b10-4e0f-b6ba-196fafeb8df2" path="/var/lib/kubelet/pods/0419d51d-7b10-4e0f-b6ba-196fafeb8df2/volumes" Jan 30 08:40:00 crc kubenswrapper[4870]: I0130 08:40:00.075601 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49" Jan 30 08:40:00 crc kubenswrapper[4870]: E0130 08:40:00.076435 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:40:01 crc kubenswrapper[4870]: I0130 08:40:01.074765 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-mz9qm"] Jan 30 08:40:01 crc kubenswrapper[4870]: I0130 08:40:01.088988 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-rxztf"] Jan 30 08:40:01 crc kubenswrapper[4870]: I0130 08:40:01.096969 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-p626s"] Jan 30 08:40:01 crc kubenswrapper[4870]: I0130 08:40:01.105131 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-p626s"] Jan 30 08:40:01 crc kubenswrapper[4870]: I0130 08:40:01.113212 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-mz9qm"] Jan 30 08:40:01 crc kubenswrapper[4870]: I0130 08:40:01.121434 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-rxztf"] Jan 30 08:40:02 crc kubenswrapper[4870]: I0130 08:40:02.039094 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-89bf-account-create-update-s9p8t"] Jan 30 08:40:02 crc kubenswrapper[4870]: I0130 08:40:02.053593 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7c52-account-create-update-bc4lx"] Jan 30 08:40:02 crc kubenswrapper[4870]: I0130 08:40:02.065268 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-89bf-account-create-update-s9p8t"] Jan 30 08:40:02 crc kubenswrapper[4870]: I0130 08:40:02.091659 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0467c513-d47e-4251-a042-74a1f0a3ba8e" path="/var/lib/kubelet/pods/0467c513-d47e-4251-a042-74a1f0a3ba8e/volumes" Jan 30 08:40:02 crc kubenswrapper[4870]: I0130 08:40:02.092474 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6cd82862-2bef-4d86-be4e-38f670a252bd" path="/var/lib/kubelet/pods/6cd82862-2bef-4d86-be4e-38f670a252bd/volumes" Jan 30 08:40:02 crc kubenswrapper[4870]: I0130 08:40:02.093246 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b82e1e2b-e78e-4b8f-8303-2ea82b24bf28" path="/var/lib/kubelet/pods/b82e1e2b-e78e-4b8f-8303-2ea82b24bf28/volumes" Jan 30 08:40:02 crc kubenswrapper[4870]: I0130 08:40:02.094194 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981" path="/var/lib/kubelet/pods/f41a5ec0-d6c7-47ab-b69f-c6c2a8bc4981/volumes" Jan 30 08:40:02 crc kubenswrapper[4870]: I0130 08:40:02.095567 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7c52-account-create-update-bc4lx"] Jan 30 08:40:02 crc kubenswrapper[4870]: I0130 08:40:02.095608 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-9204-account-create-update-pczk5"] Jan 30 08:40:02 crc kubenswrapper[4870]: I0130 08:40:02.095624 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-9204-account-create-update-pczk5"] Jan 30 08:40:04 crc kubenswrapper[4870]: I0130 08:40:04.085077 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aa80552-6dc1-43b4-ba32-8fca58595c32" path="/var/lib/kubelet/pods/9aa80552-6dc1-43b4-ba32-8fca58595c32/volumes" Jan 30 08:40:04 crc kubenswrapper[4870]: I0130 08:40:04.085860 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adf298cb-af81-4272-aacd-2d1342eab106" path="/var/lib/kubelet/pods/adf298cb-af81-4272-aacd-2d1342eab106/volumes" Jan 30 08:40:15 crc kubenswrapper[4870]: I0130 08:40:15.075932 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49" Jan 30 08:40:15 crc kubenswrapper[4870]: E0130 08:40:15.077307 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:40:29 crc kubenswrapper[4870]: I0130 08:40:29.075381 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49" Jan 30 08:40:29 crc kubenswrapper[4870]: I0130 08:40:29.547969 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerStarted","Data":"c9fb81e78dfc8d967fae5bac7b245ddd0a04c8e07a775324570150884f7934d2"} Jan 30 08:40:34 crc kubenswrapper[4870]: I0130 08:40:34.039295 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gs8vz"] Jan 30 08:40:34 crc kubenswrapper[4870]: I0130 08:40:34.049144 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gs8vz"] Jan 30 08:40:34 crc kubenswrapper[4870]: I0130 08:40:34.115180 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="463149ce-687b-479c-ab61-030371f69acb" path="/var/lib/kubelet/pods/463149ce-687b-479c-ab61-030371f69acb/volumes" Jan 30 08:40:37 crc kubenswrapper[4870]: I0130 08:40:37.035132 4870 scope.go:117] "RemoveContainer" 
containerID="b3747f1a7b0dcf93ef3e9971ceb218b892bb2531c608e6a07760d677c25d7633" Jan 30 08:40:37 crc kubenswrapper[4870]: I0130 08:40:37.111815 4870 scope.go:117] "RemoveContainer" containerID="23a36de41e3e5413c9d4a8e53e9d9062761ceb2d5ea6dc50cc6414dd812317b7" Jan 30 08:40:37 crc kubenswrapper[4870]: I0130 08:40:37.156990 4870 scope.go:117] "RemoveContainer" containerID="ebc3f13bad52a8c63665a782767852e60e31712851103659b19d6a855c623701" Jan 30 08:40:37 crc kubenswrapper[4870]: I0130 08:40:37.238166 4870 scope.go:117] "RemoveContainer" containerID="e7588860011aa90e39e44c8b147a927646ababcde64482fc80c549dc156bfaf7" Jan 30 08:40:37 crc kubenswrapper[4870]: I0130 08:40:37.264435 4870 scope.go:117] "RemoveContainer" containerID="10d8adf976aee141ddedf0f0b7d4a560074ff0040c0d225d7fde8dac560cebcd" Jan 30 08:40:37 crc kubenswrapper[4870]: I0130 08:40:37.312571 4870 scope.go:117] "RemoveContainer" containerID="ad4987bcd683a82b2ef435208c93f9f9d4904561809fe23aa9fd681008a558c6" Jan 30 08:40:37 crc kubenswrapper[4870]: I0130 08:40:37.352637 4870 scope.go:117] "RemoveContainer" containerID="046d4010b0f900fe2cbd28328fdfa8554886e3c18049908c92ba7d45ff824b80" Jan 30 08:40:53 crc kubenswrapper[4870]: I0130 08:40:53.051321 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-vrk8x"] Jan 30 08:40:53 crc kubenswrapper[4870]: I0130 08:40:53.065235 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-vrk8x"] Jan 30 08:40:54 crc kubenswrapper[4870]: I0130 08:40:54.085016 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5" path="/var/lib/kubelet/pods/8e3f03b1-ce9f-4f1d-8bb9-eecb941268c5/volumes" Jan 30 08:40:57 crc kubenswrapper[4870]: I0130 08:40:57.870379 4870 generic.go:334] "Generic (PLEG): container finished" podID="1eea19c9-87be-4160-8c11-c7ecd13cf088" containerID="7932ebc87656bf0ae01e190d06d498a15ca8251a6e3095a4f845c9e27a4ab873" exitCode=0 Jan 30 08:40:57 crc kubenswrapper[4870]: I0130 08:40:57.870473 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-948rh" event={"ID":"1eea19c9-87be-4160-8c11-c7ecd13cf088","Type":"ContainerDied","Data":"7932ebc87656bf0ae01e190d06d498a15ca8251a6e3095a4f845c9e27a4ab873"} Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.279605 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-948rh" Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.353253 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1eea19c9-87be-4160-8c11-c7ecd13cf088-inventory\") pod \"1eea19c9-87be-4160-8c11-c7ecd13cf088\" (UID: \"1eea19c9-87be-4160-8c11-c7ecd13cf088\") " Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.353477 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xplnt\" (UniqueName: \"kubernetes.io/projected/1eea19c9-87be-4160-8c11-c7ecd13cf088-kube-api-access-xplnt\") pod \"1eea19c9-87be-4160-8c11-c7ecd13cf088\" (UID: \"1eea19c9-87be-4160-8c11-c7ecd13cf088\") " Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.353589 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1eea19c9-87be-4160-8c11-c7ecd13cf088-ssh-key-openstack-edpm-ipam\") pod \"1eea19c9-87be-4160-8c11-c7ecd13cf088\" (UID: \"1eea19c9-87be-4160-8c11-c7ecd13cf088\") " Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.359537 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eea19c9-87be-4160-8c11-c7ecd13cf088-kube-api-access-xplnt" (OuterVolumeSpecName: "kube-api-access-xplnt") pod "1eea19c9-87be-4160-8c11-c7ecd13cf088" (UID: "1eea19c9-87be-4160-8c11-c7ecd13cf088"). InnerVolumeSpecName "kube-api-access-xplnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.385154 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eea19c9-87be-4160-8c11-c7ecd13cf088-inventory" (OuterVolumeSpecName: "inventory") pod "1eea19c9-87be-4160-8c11-c7ecd13cf088" (UID: "1eea19c9-87be-4160-8c11-c7ecd13cf088"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.385904 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eea19c9-87be-4160-8c11-c7ecd13cf088-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1eea19c9-87be-4160-8c11-c7ecd13cf088" (UID: "1eea19c9-87be-4160-8c11-c7ecd13cf088"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.456301 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xplnt\" (UniqueName: \"kubernetes.io/projected/1eea19c9-87be-4160-8c11-c7ecd13cf088-kube-api-access-xplnt\") on node \"crc\" DevicePath \"\"" Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.456349 4870 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1eea19c9-87be-4160-8c11-c7ecd13cf088-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.456363 4870 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1eea19c9-87be-4160-8c11-c7ecd13cf088-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.891035 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-948rh" event={"ID":"1eea19c9-87be-4160-8c11-c7ecd13cf088","Type":"ContainerDied","Data":"1770fb330557b332b33d9381d3b29b9bcd4d66980b06a1acc3c5507ce1e08776"} Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.891072 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1770fb330557b332b33d9381d3b29b9bcd4d66980b06a1acc3c5507ce1e08776" Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.891084 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-948rh" Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.978348 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh"] Jan 30 08:40:59 crc kubenswrapper[4870]: E0130 08:40:59.978703 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0419d51d-7b10-4e0f-b6ba-196fafeb8df2" containerName="extract-content" Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.978718 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0419d51d-7b10-4e0f-b6ba-196fafeb8df2" containerName="extract-content" Jan 30 08:40:59 crc kubenswrapper[4870]: E0130 08:40:59.978736 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eea19c9-87be-4160-8c11-c7ecd13cf088" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.978744 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eea19c9-87be-4160-8c11-c7ecd13cf088" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 08:40:59 crc kubenswrapper[4870]: E0130 08:40:59.978756 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0419d51d-7b10-4e0f-b6ba-196fafeb8df2" containerName="registry-server" Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.978762 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0419d51d-7b10-4e0f-b6ba-196fafeb8df2" containerName="registry-server" Jan 30 08:40:59 crc kubenswrapper[4870]: E0130 08:40:59.978778 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0419d51d-7b10-4e0f-b6ba-196fafeb8df2" containerName="extract-utilities" Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.978783 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0419d51d-7b10-4e0f-b6ba-196fafeb8df2" containerName="extract-utilities" Jan 30 08:40:59 crc 
kubenswrapper[4870]: I0130 08:40:59.979026 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eea19c9-87be-4160-8c11-c7ecd13cf088" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.979053 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="0419d51d-7b10-4e0f-b6ba-196fafeb8df2" containerName="registry-server" Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.979656 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh" Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.982793 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n6f5w" Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.982910 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.983022 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.983200 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 08:40:59 crc kubenswrapper[4870]: I0130 08:40:59.991044 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh"] Jan 30 08:41:00 crc kubenswrapper[4870]: I0130 08:41:00.068247 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f708fca-b1a9-432a-acbe-df74341208d2-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh\" (UID: \"2f708fca-b1a9-432a-acbe-df74341208d2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh" Jan 30 08:41:00 crc kubenswrapper[4870]: I0130 08:41:00.068381 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfjj6\" (UniqueName: \"kubernetes.io/projected/2f708fca-b1a9-432a-acbe-df74341208d2-kube-api-access-nfjj6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh\" (UID: \"2f708fca-b1a9-432a-acbe-df74341208d2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh" Jan 30 08:41:00 crc kubenswrapper[4870]: I0130 08:41:00.068555 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f708fca-b1a9-432a-acbe-df74341208d2-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh\" (UID: \"2f708fca-b1a9-432a-acbe-df74341208d2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh" Jan 30 08:41:00 crc kubenswrapper[4870]: I0130 08:41:00.170082 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f708fca-b1a9-432a-acbe-df74341208d2-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh\" (UID: \"2f708fca-b1a9-432a-acbe-df74341208d2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh" Jan 30 08:41:00 crc kubenswrapper[4870]: I0130 08:41:00.170218 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nfjj6\" (UniqueName: \"kubernetes.io/projected/2f708fca-b1a9-432a-acbe-df74341208d2-kube-api-access-nfjj6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh\" (UID: \"2f708fca-b1a9-432a-acbe-df74341208d2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh" Jan 30 08:41:00 crc kubenswrapper[4870]: I0130 08:41:00.170293 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f708fca-b1a9-432a-acbe-df74341208d2-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh\" (UID: \"2f708fca-b1a9-432a-acbe-df74341208d2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh" Jan 30 08:41:00 crc kubenswrapper[4870]: I0130 08:41:00.174419 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f708fca-b1a9-432a-acbe-df74341208d2-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh\" (UID: \"2f708fca-b1a9-432a-acbe-df74341208d2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh" Jan 30 08:41:00 crc kubenswrapper[4870]: I0130 08:41:00.174485 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f708fca-b1a9-432a-acbe-df74341208d2-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh\" (UID: \"2f708fca-b1a9-432a-acbe-df74341208d2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh" Jan 30 08:41:00 crc kubenswrapper[4870]: I0130 08:41:00.187115 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfjj6\" (UniqueName: \"kubernetes.io/projected/2f708fca-b1a9-432a-acbe-df74341208d2-kube-api-access-nfjj6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh\" (UID: \"2f708fca-b1a9-432a-acbe-df74341208d2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh" Jan 30 08:41:00 crc kubenswrapper[4870]: I0130 08:41:00.294231 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh" Jan 30 08:41:00 crc kubenswrapper[4870]: I0130 08:41:00.876095 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh"] Jan 30 08:41:00 crc kubenswrapper[4870]: I0130 08:41:00.882016 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 08:41:00 crc kubenswrapper[4870]: I0130 08:41:00.919190 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh" event={"ID":"2f708fca-b1a9-432a-acbe-df74341208d2","Type":"ContainerStarted","Data":"025650171f5d69cc5598b7176854c53e1bb04509791618e6a95971b09eded094"} Jan 30 08:41:02 crc kubenswrapper[4870]: I0130 08:41:02.038274 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-m882v"] Jan 30 08:41:02 crc kubenswrapper[4870]: I0130 08:41:02.044447 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-m882v"] Jan 30 08:41:02 crc kubenswrapper[4870]: I0130 08:41:02.087958 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df7d1e35-e72c-4a05-8a4a-89647f93a26c" path="/var/lib/kubelet/pods/df7d1e35-e72c-4a05-8a4a-89647f93a26c/volumes" Jan 30 08:41:04 crc kubenswrapper[4870]: I0130 08:41:04.958339 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh" event={"ID":"2f708fca-b1a9-432a-acbe-df74341208d2","Type":"ContainerStarted","Data":"4c8c236a6e47501efdc59b70f477bf862ee67ea67d633a312fce41dcf1117d23"} Jan 30 08:41:06 crc kubenswrapper[4870]: I0130 08:41:06.004833 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh" podStartSLOduration=4.20319988 podStartE2EDuration="7.004806531s" podCreationTimestamp="2026-01-30 08:40:59 +0000 UTC" firstStartedPulling="2026-01-30 08:41:00.88178233 +0000 UTC m=+1899.577329439" lastFinishedPulling="2026-01-30 08:41:03.683388981 +0000 UTC m=+1902.378936090" observedRunningTime="2026-01-30 08:41:05.989534455 +0000 UTC m=+1904.685081604" watchObservedRunningTime="2026-01-30 08:41:06.004806531 +0000 UTC m=+1904.700353680" Jan 30 08:41:10 crc kubenswrapper[4870]: I0130 08:41:10.012146 4870 generic.go:334] "Generic (PLEG): container finished" podID="2f708fca-b1a9-432a-acbe-df74341208d2" containerID="4c8c236a6e47501efdc59b70f477bf862ee67ea67d633a312fce41dcf1117d23" exitCode=0 Jan 30 08:41:10 crc kubenswrapper[4870]: I0130 08:41:10.012248 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh" event={"ID":"2f708fca-b1a9-432a-acbe-df74341208d2","Type":"ContainerDied","Data":"4c8c236a6e47501efdc59b70f477bf862ee67ea67d633a312fce41dcf1117d23"} Jan 30 08:41:11 crc kubenswrapper[4870]: I0130 08:41:11.438438 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh" Jan 30 08:41:11 crc kubenswrapper[4870]: I0130 08:41:11.526183 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f708fca-b1a9-432a-acbe-df74341208d2-inventory\") pod \"2f708fca-b1a9-432a-acbe-df74341208d2\" (UID: \"2f708fca-b1a9-432a-acbe-df74341208d2\") " Jan 30 08:41:11 crc kubenswrapper[4870]: I0130 08:41:11.526269 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f708fca-b1a9-432a-acbe-df74341208d2-ssh-key-openstack-edpm-ipam\") pod \"2f708fca-b1a9-432a-acbe-df74341208d2\" (UID: \"2f708fca-b1a9-432a-acbe-df74341208d2\") " Jan 30 08:41:11 crc kubenswrapper[4870]: I0130 08:41:11.526393 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfjj6\" (UniqueName: \"kubernetes.io/projected/2f708fca-b1a9-432a-acbe-df74341208d2-kube-api-access-nfjj6\") pod \"2f708fca-b1a9-432a-acbe-df74341208d2\" (UID: \"2f708fca-b1a9-432a-acbe-df74341208d2\") " Jan 30 08:41:11 crc kubenswrapper[4870]: I0130 08:41:11.533175 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f708fca-b1a9-432a-acbe-df74341208d2-kube-api-access-nfjj6" (OuterVolumeSpecName: "kube-api-access-nfjj6") pod "2f708fca-b1a9-432a-acbe-df74341208d2" (UID: "2f708fca-b1a9-432a-acbe-df74341208d2"). InnerVolumeSpecName "kube-api-access-nfjj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:41:11 crc kubenswrapper[4870]: I0130 08:41:11.555063 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f708fca-b1a9-432a-acbe-df74341208d2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2f708fca-b1a9-432a-acbe-df74341208d2" (UID: "2f708fca-b1a9-432a-acbe-df74341208d2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:41:11 crc kubenswrapper[4870]: I0130 08:41:11.555099 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f708fca-b1a9-432a-acbe-df74341208d2-inventory" (OuterVolumeSpecName: "inventory") pod "2f708fca-b1a9-432a-acbe-df74341208d2" (UID: "2f708fca-b1a9-432a-acbe-df74341208d2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:41:11 crc kubenswrapper[4870]: I0130 08:41:11.629029 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfjj6\" (UniqueName: \"kubernetes.io/projected/2f708fca-b1a9-432a-acbe-df74341208d2-kube-api-access-nfjj6\") on node \"crc\" DevicePath \"\"" Jan 30 08:41:11 crc kubenswrapper[4870]: I0130 08:41:11.629062 4870 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f708fca-b1a9-432a-acbe-df74341208d2-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 08:41:11 crc kubenswrapper[4870]: I0130 08:41:11.629072 4870 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f708fca-b1a9-432a-acbe-df74341208d2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.030740 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh" event={"ID":"2f708fca-b1a9-432a-acbe-df74341208d2","Type":"ContainerDied","Data":"025650171f5d69cc5598b7176854c53e1bb04509791618e6a95971b09eded094"} Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.030781 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="025650171f5d69cc5598b7176854c53e1bb04509791618e6a95971b09eded094" Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.030797 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh" Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.125995 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4hkm"] Jan 30 08:41:12 crc kubenswrapper[4870]: E0130 08:41:12.126570 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f708fca-b1a9-432a-acbe-df74341208d2" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.126593 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f708fca-b1a9-432a-acbe-df74341208d2" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.126829 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f708fca-b1a9-432a-acbe-df74341208d2" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.127674 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4hkm" Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.129668 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.129973 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.130260 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.130377 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n6f5w" Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.138814 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4hkm"] Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.139927 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/82fb960a-335c-4d35-baed-122cd1cb515d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z4hkm\" (UID: \"82fb960a-335c-4d35-baed-122cd1cb515d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4hkm" Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.139981 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82fb960a-335c-4d35-baed-122cd1cb515d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z4hkm\" (UID: \"82fb960a-335c-4d35-baed-122cd1cb515d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4hkm" Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.140009 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rlph\" (UniqueName: \"kubernetes.io/projected/82fb960a-335c-4d35-baed-122cd1cb515d-kube-api-access-7rlph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z4hkm\" (UID: \"82fb960a-335c-4d35-baed-122cd1cb515d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4hkm" Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.241053 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/82fb960a-335c-4d35-baed-122cd1cb515d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z4hkm\" (UID: \"82fb960a-335c-4d35-baed-122cd1cb515d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4hkm" Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.241362 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82fb960a-335c-4d35-baed-122cd1cb515d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z4hkm\" (UID: \"82fb960a-335c-4d35-baed-122cd1cb515d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4hkm" Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.241425 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rlph\" (UniqueName: \"kubernetes.io/projected/82fb960a-335c-4d35-baed-122cd1cb515d-kube-api-access-7rlph\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-z4hkm\" (UID: \"82fb960a-335c-4d35-baed-122cd1cb515d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4hkm" Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.245257 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82fb960a-335c-4d35-baed-122cd1cb515d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z4hkm\" (UID: \"82fb960a-335c-4d35-baed-122cd1cb515d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4hkm" Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.250312 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/82fb960a-335c-4d35-baed-122cd1cb515d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z4hkm\" (UID: \"82fb960a-335c-4d35-baed-122cd1cb515d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4hkm" Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.258314 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rlph\" (UniqueName: \"kubernetes.io/projected/82fb960a-335c-4d35-baed-122cd1cb515d-kube-api-access-7rlph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z4hkm\" (UID: \"82fb960a-335c-4d35-baed-122cd1cb515d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4hkm" Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.444958 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4hkm" Jan 30 08:41:12 crc kubenswrapper[4870]: I0130 08:41:12.984533 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4hkm"] Jan 30 08:41:13 crc kubenswrapper[4870]: I0130 08:41:13.044661 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4hkm" event={"ID":"82fb960a-335c-4d35-baed-122cd1cb515d","Type":"ContainerStarted","Data":"ac21d1032be1f0066a16137837559fa556ce16b8cc044603da108adf8f506076"} Jan 30 08:41:15 crc kubenswrapper[4870]: I0130 08:41:15.059613 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4hkm" event={"ID":"82fb960a-335c-4d35-baed-122cd1cb515d","Type":"ContainerStarted","Data":"96fa2735a8adb0378e99eb607ea919b72d94c98634fe9c945d63764c78f1d5a5"} Jan 30 08:41:15 crc kubenswrapper[4870]: I0130 08:41:15.077829 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4hkm" podStartSLOduration=2.364217616 podStartE2EDuration="3.07781136s" podCreationTimestamp="2026-01-30 08:41:12 +0000 UTC" firstStartedPulling="2026-01-30 08:41:12.997840155 +0000 UTC m=+1911.693387264" lastFinishedPulling="2026-01-30 08:41:13.711433899 +0000 UTC m=+1912.406981008" observedRunningTime="2026-01-30 08:41:15.072169234 +0000 UTC m=+1913.767716343" watchObservedRunningTime="2026-01-30 08:41:15.07781136 +0000 UTC m=+1913.773358469" Jan 30 08:41:37 crc kubenswrapper[4870]: I0130 08:41:37.558986 4870 scope.go:117] "RemoveContainer" containerID="d505e04dc454937c02de4ea80fb1b30e9ec281deb651bdc207ab606295f95619" Jan 30 08:41:37 crc kubenswrapper[4870]: I0130 08:41:37.629901 4870 scope.go:117] "RemoveContainer" 
containerID="8042d48e5c92127670e964e37628c795eb4a833864e07f5be3a23644c40ab2aa" Jan 30 08:41:43 crc kubenswrapper[4870]: I0130 08:41:43.052543 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-hhwc4"] Jan 30 08:41:43 crc kubenswrapper[4870]: I0130 08:41:43.064772 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-hhwc4"] Jan 30 08:41:44 crc kubenswrapper[4870]: I0130 08:41:44.089956 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1dfb454-58dc-4c83-b25e-cabaab6cb747" path="/var/lib/kubelet/pods/c1dfb454-58dc-4c83-b25e-cabaab6cb747/volumes" Jan 30 08:41:53 crc kubenswrapper[4870]: I0130 08:41:53.792470 4870 generic.go:334] "Generic (PLEG): container finished" podID="82fb960a-335c-4d35-baed-122cd1cb515d" containerID="96fa2735a8adb0378e99eb607ea919b72d94c98634fe9c945d63764c78f1d5a5" exitCode=0 Jan 30 08:41:53 crc kubenswrapper[4870]: I0130 08:41:53.793029 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4hkm" event={"ID":"82fb960a-335c-4d35-baed-122cd1cb515d","Type":"ContainerDied","Data":"96fa2735a8adb0378e99eb607ea919b72d94c98634fe9c945d63764c78f1d5a5"} Jan 30 08:41:55 crc kubenswrapper[4870]: I0130 08:41:55.262333 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4hkm" Jan 30 08:41:55 crc kubenswrapper[4870]: I0130 08:41:55.406833 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/82fb960a-335c-4d35-baed-122cd1cb515d-ssh-key-openstack-edpm-ipam\") pod \"82fb960a-335c-4d35-baed-122cd1cb515d\" (UID: \"82fb960a-335c-4d35-baed-122cd1cb515d\") " Jan 30 08:41:55 crc kubenswrapper[4870]: I0130 08:41:55.406947 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82fb960a-335c-4d35-baed-122cd1cb515d-inventory\") pod \"82fb960a-335c-4d35-baed-122cd1cb515d\" (UID: \"82fb960a-335c-4d35-baed-122cd1cb515d\") " Jan 30 08:41:55 crc kubenswrapper[4870]: I0130 08:41:55.406985 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rlph\" (UniqueName: \"kubernetes.io/projected/82fb960a-335c-4d35-baed-122cd1cb515d-kube-api-access-7rlph\") pod \"82fb960a-335c-4d35-baed-122cd1cb515d\" (UID: \"82fb960a-335c-4d35-baed-122cd1cb515d\") " Jan 30 08:41:55 crc kubenswrapper[4870]: I0130 08:41:55.434126 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82fb960a-335c-4d35-baed-122cd1cb515d-kube-api-access-7rlph" (OuterVolumeSpecName: "kube-api-access-7rlph") pod "82fb960a-335c-4d35-baed-122cd1cb515d" (UID: "82fb960a-335c-4d35-baed-122cd1cb515d"). InnerVolumeSpecName "kube-api-access-7rlph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:41:55 crc kubenswrapper[4870]: I0130 08:41:55.460076 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82fb960a-335c-4d35-baed-122cd1cb515d-inventory" (OuterVolumeSpecName: "inventory") pod "82fb960a-335c-4d35-baed-122cd1cb515d" (UID: "82fb960a-335c-4d35-baed-122cd1cb515d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:41:55 crc kubenswrapper[4870]: I0130 08:41:55.471444 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82fb960a-335c-4d35-baed-122cd1cb515d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "82fb960a-335c-4d35-baed-122cd1cb515d" (UID: "82fb960a-335c-4d35-baed-122cd1cb515d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:41:55 crc kubenswrapper[4870]: I0130 08:41:55.509664 4870 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/82fb960a-335c-4d35-baed-122cd1cb515d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 08:41:55 crc kubenswrapper[4870]: I0130 08:41:55.509728 4870 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82fb960a-335c-4d35-baed-122cd1cb515d-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 08:41:55 crc kubenswrapper[4870]: I0130 08:41:55.509742 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rlph\" (UniqueName: \"kubernetes.io/projected/82fb960a-335c-4d35-baed-122cd1cb515d-kube-api-access-7rlph\") on node \"crc\" DevicePath \"\"" Jan 30 08:41:55 crc kubenswrapper[4870]: I0130 08:41:55.822446 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4hkm" event={"ID":"82fb960a-335c-4d35-baed-122cd1cb515d","Type":"ContainerDied","Data":"ac21d1032be1f0066a16137837559fa556ce16b8cc044603da108adf8f506076"} Jan 30 08:41:55 crc kubenswrapper[4870]: I0130 08:41:55.822498 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac21d1032be1f0066a16137837559fa556ce16b8cc044603da108adf8f506076" Jan 30 08:41:55 crc kubenswrapper[4870]: I0130 08:41:55.822560 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4hkm" Jan 30 08:41:55 crc kubenswrapper[4870]: I0130 08:41:55.921419 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n"] Jan 30 08:41:55 crc kubenswrapper[4870]: E0130 08:41:55.921926 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82fb960a-335c-4d35-baed-122cd1cb515d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 08:41:55 crc kubenswrapper[4870]: I0130 08:41:55.921948 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="82fb960a-335c-4d35-baed-122cd1cb515d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 08:41:55 crc kubenswrapper[4870]: I0130 08:41:55.922259 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="82fb960a-335c-4d35-baed-122cd1cb515d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 08:41:55 crc kubenswrapper[4870]: I0130 08:41:55.922993 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n" Jan 30 08:41:55 crc kubenswrapper[4870]: I0130 08:41:55.925362 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 08:41:55 crc kubenswrapper[4870]: I0130 08:41:55.926000 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 08:41:55 crc kubenswrapper[4870]: I0130 08:41:55.929776 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 08:41:55 crc kubenswrapper[4870]: I0130 08:41:55.929943 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n6f5w" Jan 30 08:41:55 crc kubenswrapper[4870]: I0130 08:41:55.938869 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n"] Jan 30 08:41:56 crc kubenswrapper[4870]: I0130 08:41:56.121617 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f32f4b01-631a-4f4b-8ffb-f0873b819de0-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n\" (UID: \"f32f4b01-631a-4f4b-8ffb-f0873b819de0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n" Jan 30 08:41:56 crc kubenswrapper[4870]: I0130 08:41:56.122118 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srxkm\" (UniqueName: \"kubernetes.io/projected/f32f4b01-631a-4f4b-8ffb-f0873b819de0-kube-api-access-srxkm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n\" (UID: \"f32f4b01-631a-4f4b-8ffb-f0873b819de0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n" Jan 30 08:41:56 crc kubenswrapper[4870]: I0130 08:41:56.122414 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f32f4b01-631a-4f4b-8ffb-f0873b819de0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n\" (UID: \"f32f4b01-631a-4f4b-8ffb-f0873b819de0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n" Jan 30 08:41:56 crc kubenswrapper[4870]: I0130 08:41:56.224454 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f32f4b01-631a-4f4b-8ffb-f0873b819de0-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n\" (UID: \"f32f4b01-631a-4f4b-8ffb-f0873b819de0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n" Jan 30 08:41:56 crc kubenswrapper[4870]: I0130 08:41:56.224594 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srxkm\" (UniqueName: \"kubernetes.io/projected/f32f4b01-631a-4f4b-8ffb-f0873b819de0-kube-api-access-srxkm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n\" (UID: \"f32f4b01-631a-4f4b-8ffb-f0873b819de0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n" Jan 30 08:41:56 crc kubenswrapper[4870]: I0130 08:41:56.224662 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/f32f4b01-631a-4f4b-8ffb-f0873b819de0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n\" (UID: \"f32f4b01-631a-4f4b-8ffb-f0873b819de0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n" Jan 30 08:41:56 crc kubenswrapper[4870]: I0130 08:41:56.229515 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f32f4b01-631a-4f4b-8ffb-f0873b819de0-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n\" (UID: \"f32f4b01-631a-4f4b-8ffb-f0873b819de0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n" Jan 30 08:41:56 crc kubenswrapper[4870]: I0130 08:41:56.237097 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f32f4b01-631a-4f4b-8ffb-f0873b819de0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n\" (UID: \"f32f4b01-631a-4f4b-8ffb-f0873b819de0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n" Jan 30 08:41:56 crc kubenswrapper[4870]: I0130 08:41:56.243494 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srxkm\" (UniqueName: \"kubernetes.io/projected/f32f4b01-631a-4f4b-8ffb-f0873b819de0-kube-api-access-srxkm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n\" (UID: \"f32f4b01-631a-4f4b-8ffb-f0873b819de0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n" Jan 30 08:41:56 crc kubenswrapper[4870]: I0130 08:41:56.281599 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n" Jan 30 08:41:56 crc kubenswrapper[4870]: I0130 08:41:56.891282 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n"] Jan 30 08:41:57 crc kubenswrapper[4870]: I0130 08:41:57.840279 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n" event={"ID":"f32f4b01-631a-4f4b-8ffb-f0873b819de0","Type":"ContainerStarted","Data":"842d2706f5ff57fb0c4252a173f50ad6ed9ad319ed05f4307b1cbd3bb33828b1"} Jan 30 08:41:57 crc kubenswrapper[4870]: I0130 08:41:57.840631 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n" event={"ID":"f32f4b01-631a-4f4b-8ffb-f0873b819de0","Type":"ContainerStarted","Data":"8a351807bd85fefefa0570c8fe5bdbb23839b0f7ee2e93949b4dd32f9ca2828e"} Jan 30 08:41:57 crc kubenswrapper[4870]: I0130 08:41:57.866166 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n" podStartSLOduration=2.432303346 podStartE2EDuration="2.86613841s" podCreationTimestamp="2026-01-30 08:41:55 +0000 UTC" firstStartedPulling="2026-01-30 08:41:56.900771408 +0000 UTC m=+1955.596318517" lastFinishedPulling="2026-01-30 08:41:57.334606472 +0000 UTC m=+1956.030153581" observedRunningTime="2026-01-30 08:41:57.854860918 +0000 UTC m=+1956.550408037" watchObservedRunningTime="2026-01-30 08:41:57.86613841 +0000 UTC m=+1956.561685519" Jan 30 08:42:37 crc kubenswrapper[4870]: I0130 08:42:37.713161 4870 scope.go:117] "RemoveContainer" containerID="4b78d78b78d21abcc7506de0b24454a50e055736a3c90f711e671ea39c5653ae" Jan 30 08:42:47 crc kubenswrapper[4870]: I0130 08:42:47.286237 
4870 generic.go:334] "Generic (PLEG): container finished" podID="f32f4b01-631a-4f4b-8ffb-f0873b819de0" containerID="842d2706f5ff57fb0c4252a173f50ad6ed9ad319ed05f4307b1cbd3bb33828b1" exitCode=0 Jan 30 08:42:47 crc kubenswrapper[4870]: I0130 08:42:47.286335 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n" event={"ID":"f32f4b01-631a-4f4b-8ffb-f0873b819de0","Type":"ContainerDied","Data":"842d2706f5ff57fb0c4252a173f50ad6ed9ad319ed05f4307b1cbd3bb33828b1"} Jan 30 08:42:48 crc kubenswrapper[4870]: I0130 08:42:48.742171 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n" Jan 30 08:42:48 crc kubenswrapper[4870]: I0130 08:42:48.789550 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srxkm\" (UniqueName: \"kubernetes.io/projected/f32f4b01-631a-4f4b-8ffb-f0873b819de0-kube-api-access-srxkm\") pod \"f32f4b01-631a-4f4b-8ffb-f0873b819de0\" (UID: \"f32f4b01-631a-4f4b-8ffb-f0873b819de0\") " Jan 30 08:42:48 crc kubenswrapper[4870]: I0130 08:42:48.790239 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f32f4b01-631a-4f4b-8ffb-f0873b819de0-inventory\") pod \"f32f4b01-631a-4f4b-8ffb-f0873b819de0\" (UID: \"f32f4b01-631a-4f4b-8ffb-f0873b819de0\") " Jan 30 08:42:48 crc kubenswrapper[4870]: I0130 08:42:48.790512 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f32f4b01-631a-4f4b-8ffb-f0873b819de0-ssh-key-openstack-edpm-ipam\") pod \"f32f4b01-631a-4f4b-8ffb-f0873b819de0\" (UID: \"f32f4b01-631a-4f4b-8ffb-f0873b819de0\") " Jan 30 08:42:48 crc kubenswrapper[4870]: I0130 08:42:48.797285 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f32f4b01-631a-4f4b-8ffb-f0873b819de0-kube-api-access-srxkm" (OuterVolumeSpecName: "kube-api-access-srxkm") pod "f32f4b01-631a-4f4b-8ffb-f0873b819de0" (UID: "f32f4b01-631a-4f4b-8ffb-f0873b819de0"). InnerVolumeSpecName "kube-api-access-srxkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:42:48 crc kubenswrapper[4870]: I0130 08:42:48.820414 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f32f4b01-631a-4f4b-8ffb-f0873b819de0-inventory" (OuterVolumeSpecName: "inventory") pod "f32f4b01-631a-4f4b-8ffb-f0873b819de0" (UID: "f32f4b01-631a-4f4b-8ffb-f0873b819de0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:42:48 crc kubenswrapper[4870]: I0130 08:42:48.825439 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f32f4b01-631a-4f4b-8ffb-f0873b819de0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f32f4b01-631a-4f4b-8ffb-f0873b819de0" (UID: "f32f4b01-631a-4f4b-8ffb-f0873b819de0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:42:48 crc kubenswrapper[4870]: I0130 08:42:48.895659 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srxkm\" (UniqueName: \"kubernetes.io/projected/f32f4b01-631a-4f4b-8ffb-f0873b819de0-kube-api-access-srxkm\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:48 crc kubenswrapper[4870]: I0130 08:42:48.895719 4870 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f32f4b01-631a-4f4b-8ffb-f0873b819de0-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:48 crc kubenswrapper[4870]: I0130 08:42:48.895735 4870 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f32f4b01-631a-4f4b-8ffb-f0873b819de0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:49 crc kubenswrapper[4870]: I0130 08:42:49.317927 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n" event={"ID":"f32f4b01-631a-4f4b-8ffb-f0873b819de0","Type":"ContainerDied","Data":"8a351807bd85fefefa0570c8fe5bdbb23839b0f7ee2e93949b4dd32f9ca2828e"} Jan 30 08:42:49 crc kubenswrapper[4870]: I0130 08:42:49.317968 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a351807bd85fefefa0570c8fe5bdbb23839b0f7ee2e93949b4dd32f9ca2828e" Jan 30 08:42:49 crc kubenswrapper[4870]: I0130 08:42:49.318005 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n" Jan 30 08:42:49 crc kubenswrapper[4870]: I0130 08:42:49.399337 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j8w7g"] Jan 30 08:42:49 crc kubenswrapper[4870]: E0130 08:42:49.399892 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f32f4b01-631a-4f4b-8ffb-f0873b819de0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 08:42:49 crc kubenswrapper[4870]: I0130 08:42:49.399913 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f32f4b01-631a-4f4b-8ffb-f0873b819de0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 08:42:49 crc kubenswrapper[4870]: I0130 08:42:49.400104 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f32f4b01-631a-4f4b-8ffb-f0873b819de0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 08:42:49 crc kubenswrapper[4870]: I0130 08:42:49.400819 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j8w7g" Jan 30 08:42:49 crc kubenswrapper[4870]: I0130 08:42:49.403409 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 08:42:49 crc kubenswrapper[4870]: I0130 08:42:49.403609 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n6f5w" Jan 30 08:42:49 crc kubenswrapper[4870]: I0130 08:42:49.403745 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 08:42:49 crc kubenswrapper[4870]: I0130 08:42:49.403866 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 08:42:49 crc kubenswrapper[4870]: I0130 08:42:49.408433 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j8w7g"] Jan 30 08:42:49 crc kubenswrapper[4870]: I0130 08:42:49.506404 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/07db545c-df21-4f19-ad37-3071248b8672-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-j8w7g\" (UID: \"07db545c-df21-4f19-ad37-3071248b8672\") " pod="openstack/ssh-known-hosts-edpm-deployment-j8w7g" Jan 30 08:42:49 crc kubenswrapper[4870]: I0130 08:42:49.506523 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd9zp\" (UniqueName: \"kubernetes.io/projected/07db545c-df21-4f19-ad37-3071248b8672-kube-api-access-kd9zp\") pod \"ssh-known-hosts-edpm-deployment-j8w7g\" (UID: \"07db545c-df21-4f19-ad37-3071248b8672\") " pod="openstack/ssh-known-hosts-edpm-deployment-j8w7g" Jan 30 08:42:49 crc kubenswrapper[4870]: I0130 08:42:49.507560 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07db545c-df21-4f19-ad37-3071248b8672-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-j8w7g\" (UID: \"07db545c-df21-4f19-ad37-3071248b8672\") " pod="openstack/ssh-known-hosts-edpm-deployment-j8w7g" Jan 30 08:42:49 crc kubenswrapper[4870]: I0130 08:42:49.609060 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07db545c-df21-4f19-ad37-3071248b8672-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-j8w7g\" (UID: \"07db545c-df21-4f19-ad37-3071248b8672\") " pod="openstack/ssh-known-hosts-edpm-deployment-j8w7g" Jan 30 08:42:49 crc kubenswrapper[4870]: I0130 08:42:49.609194 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/07db545c-df21-4f19-ad37-3071248b8672-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-j8w7g\" (UID: \"07db545c-df21-4f19-ad37-3071248b8672\") " pod="openstack/ssh-known-hosts-edpm-deployment-j8w7g" Jan 30 08:42:49 crc kubenswrapper[4870]: I0130 08:42:49.609291 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd9zp\" (UniqueName: \"kubernetes.io/projected/07db545c-df21-4f19-ad37-3071248b8672-kube-api-access-kd9zp\") pod \"ssh-known-hosts-edpm-deployment-j8w7g\" (UID: \"07db545c-df21-4f19-ad37-3071248b8672\") " pod="openstack/ssh-known-hosts-edpm-deployment-j8w7g" Jan 30 08:42:49 crc 
kubenswrapper[4870]: I0130 08:42:49.612656 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/07db545c-df21-4f19-ad37-3071248b8672-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-j8w7g\" (UID: \"07db545c-df21-4f19-ad37-3071248b8672\") " pod="openstack/ssh-known-hosts-edpm-deployment-j8w7g" Jan 30 08:42:49 crc kubenswrapper[4870]: I0130 08:42:49.614284 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07db545c-df21-4f19-ad37-3071248b8672-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-j8w7g\" (UID: \"07db545c-df21-4f19-ad37-3071248b8672\") " pod="openstack/ssh-known-hosts-edpm-deployment-j8w7g" Jan 30 08:42:49 crc kubenswrapper[4870]: I0130 08:42:49.632856 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd9zp\" (UniqueName: \"kubernetes.io/projected/07db545c-df21-4f19-ad37-3071248b8672-kube-api-access-kd9zp\") pod \"ssh-known-hosts-edpm-deployment-j8w7g\" (UID: \"07db545c-df21-4f19-ad37-3071248b8672\") " pod="openstack/ssh-known-hosts-edpm-deployment-j8w7g" Jan 30 08:42:49 crc kubenswrapper[4870]: I0130 08:42:49.754158 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j8w7g" Jan 30 08:42:50 crc kubenswrapper[4870]: I0130 08:42:50.425852 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j8w7g"] Jan 30 08:42:51 crc kubenswrapper[4870]: I0130 08:42:51.338266 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j8w7g" event={"ID":"07db545c-df21-4f19-ad37-3071248b8672","Type":"ContainerStarted","Data":"99572818f1bad16bed8a0aa758dc16a90117312700653b7060fa549cf2294c0d"} Jan 30 08:42:51 crc kubenswrapper[4870]: I0130 08:42:51.338672 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j8w7g" event={"ID":"07db545c-df21-4f19-ad37-3071248b8672","Type":"ContainerStarted","Data":"d0704d1a7324fb19affb05f629070ba3f4a83927946c3dbdffc8fda4be173d38"} Jan 30 08:42:55 crc kubenswrapper[4870]: I0130 08:42:55.249800 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:42:55 crc kubenswrapper[4870]: I0130 08:42:55.250117 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:42:58 crc kubenswrapper[4870]: I0130 08:42:58.395097 4870 generic.go:334] "Generic (PLEG): container finished" podID="07db545c-df21-4f19-ad37-3071248b8672" containerID="99572818f1bad16bed8a0aa758dc16a90117312700653b7060fa549cf2294c0d" exitCode=0 Jan 30 08:42:58 crc kubenswrapper[4870]: I0130 08:42:58.395148 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j8w7g" event={"ID":"07db545c-df21-4f19-ad37-3071248b8672","Type":"ContainerDied","Data":"99572818f1bad16bed8a0aa758dc16a90117312700653b7060fa549cf2294c0d"} Jan 30 
08:42:59 crc kubenswrapper[4870]: I0130 08:42:59.894993 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j8w7g" Jan 30 08:42:59 crc kubenswrapper[4870]: I0130 08:42:59.920578 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd9zp\" (UniqueName: \"kubernetes.io/projected/07db545c-df21-4f19-ad37-3071248b8672-kube-api-access-kd9zp\") pod \"07db545c-df21-4f19-ad37-3071248b8672\" (UID: \"07db545c-df21-4f19-ad37-3071248b8672\") " Jan 30 08:42:59 crc kubenswrapper[4870]: I0130 08:42:59.920645 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/07db545c-df21-4f19-ad37-3071248b8672-inventory-0\") pod \"07db545c-df21-4f19-ad37-3071248b8672\" (UID: \"07db545c-df21-4f19-ad37-3071248b8672\") " Jan 30 08:42:59 crc kubenswrapper[4870]: I0130 08:42:59.920710 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07db545c-df21-4f19-ad37-3071248b8672-ssh-key-openstack-edpm-ipam\") pod \"07db545c-df21-4f19-ad37-3071248b8672\" (UID: \"07db545c-df21-4f19-ad37-3071248b8672\") " Jan 30 08:42:59 crc kubenswrapper[4870]: I0130 08:42:59.926292 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07db545c-df21-4f19-ad37-3071248b8672-kube-api-access-kd9zp" (OuterVolumeSpecName: "kube-api-access-kd9zp") pod "07db545c-df21-4f19-ad37-3071248b8672" (UID: "07db545c-df21-4f19-ad37-3071248b8672"). InnerVolumeSpecName "kube-api-access-kd9zp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:42:59 crc kubenswrapper[4870]: I0130 08:42:59.956073 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07db545c-df21-4f19-ad37-3071248b8672-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "07db545c-df21-4f19-ad37-3071248b8672" (UID: "07db545c-df21-4f19-ad37-3071248b8672"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:42:59 crc kubenswrapper[4870]: I0130 08:42:59.959714 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07db545c-df21-4f19-ad37-3071248b8672-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "07db545c-df21-4f19-ad37-3071248b8672" (UID: "07db545c-df21-4f19-ad37-3071248b8672"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.023007 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd9zp\" (UniqueName: \"kubernetes.io/projected/07db545c-df21-4f19-ad37-3071248b8672-kube-api-access-kd9zp\") on node \"crc\" DevicePath \"\""
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.023050 4870 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/07db545c-df21-4f19-ad37-3071248b8672-inventory-0\") on node \"crc\" DevicePath \"\""
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.023063 4870 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07db545c-df21-4f19-ad37-3071248b8672-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.415548 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j8w7g" event={"ID":"07db545c-df21-4f19-ad37-3071248b8672","Type":"ContainerDied","Data":"d0704d1a7324fb19affb05f629070ba3f4a83927946c3dbdffc8fda4be173d38"}
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.415936 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0704d1a7324fb19affb05f629070ba3f4a83927946c3dbdffc8fda4be173d38"
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.415592 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j8w7g"
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.488662 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl6bw"]
Jan 30 08:43:00 crc kubenswrapper[4870]: E0130 08:43:00.489065 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07db545c-df21-4f19-ad37-3071248b8672" containerName="ssh-known-hosts-edpm-deployment"
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.489084 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="07db545c-df21-4f19-ad37-3071248b8672" containerName="ssh-known-hosts-edpm-deployment"
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.489290 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="07db545c-df21-4f19-ad37-3071248b8672" containerName="ssh-known-hosts-edpm-deployment"
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.490055 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl6bw"
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.492121 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n6f5w"
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.492747 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.492795 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.493251 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.504751 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl6bw"]
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.534595 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a685318c-e23f-4192-8ab4-7dbf24880b0d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zl6bw\" (UID: \"a685318c-e23f-4192-8ab4-7dbf24880b0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl6bw"
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.534656 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdkrm\" (UniqueName: \"kubernetes.io/projected/a685318c-e23f-4192-8ab4-7dbf24880b0d-kube-api-access-gdkrm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zl6bw\" (UID: \"a685318c-e23f-4192-8ab4-7dbf24880b0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl6bw"
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.534725 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a685318c-e23f-4192-8ab4-7dbf24880b0d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zl6bw\" (UID: \"a685318c-e23f-4192-8ab4-7dbf24880b0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl6bw"
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.636773 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a685318c-e23f-4192-8ab4-7dbf24880b0d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zl6bw\" (UID: \"a685318c-e23f-4192-8ab4-7dbf24880b0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl6bw"
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.636817 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdkrm\" (UniqueName: \"kubernetes.io/projected/a685318c-e23f-4192-8ab4-7dbf24880b0d-kube-api-access-gdkrm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zl6bw\" (UID: \"a685318c-e23f-4192-8ab4-7dbf24880b0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl6bw"
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.636907 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a685318c-e23f-4192-8ab4-7dbf24880b0d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zl6bw\" (UID: \"a685318c-e23f-4192-8ab4-7dbf24880b0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl6bw"
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.641985 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a685318c-e23f-4192-8ab4-7dbf24880b0d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zl6bw\" (UID: \"a685318c-e23f-4192-8ab4-7dbf24880b0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl6bw"
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.642155 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a685318c-e23f-4192-8ab4-7dbf24880b0d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zl6bw\" (UID: \"a685318c-e23f-4192-8ab4-7dbf24880b0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl6bw"
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.655609 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdkrm\" (UniqueName: \"kubernetes.io/projected/a685318c-e23f-4192-8ab4-7dbf24880b0d-kube-api-access-gdkrm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zl6bw\" (UID: \"a685318c-e23f-4192-8ab4-7dbf24880b0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl6bw"
Jan 30 08:43:00 crc kubenswrapper[4870]: I0130 08:43:00.815010 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl6bw"
Jan 30 08:43:01 crc kubenswrapper[4870]: I0130 08:43:01.412669 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl6bw"]
Jan 30 08:43:01 crc kubenswrapper[4870]: I0130 08:43:01.428910 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl6bw" event={"ID":"a685318c-e23f-4192-8ab4-7dbf24880b0d","Type":"ContainerStarted","Data":"98e9af6a5a9f939da73b19d3e83f7092c62a5c0fbe279d5298471495f178146d"}
Jan 30 08:43:02 crc kubenswrapper[4870]: I0130 08:43:02.442030 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl6bw" event={"ID":"a685318c-e23f-4192-8ab4-7dbf24880b0d","Type":"ContainerStarted","Data":"41f27eabe1099445b59edd44241e4073e4d7dcaf32de5af8d9511c8bb8c53398"}
Jan 30 08:43:02 crc kubenswrapper[4870]: I0130 08:43:02.470996 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl6bw" podStartSLOduration=1.9879866769999999 podStartE2EDuration="2.470966262s" podCreationTimestamp="2026-01-30 08:43:00 +0000 UTC" firstStartedPulling="2026-01-30 08:43:01.415606497 +0000 UTC m=+2020.111153606" lastFinishedPulling="2026-01-30 08:43:01.898586082 +0000 UTC m=+2020.594133191" observedRunningTime="2026-01-30 08:43:02.462384485 +0000 UTC m=+2021.157931584" watchObservedRunningTime="2026-01-30 08:43:02.470966262 +0000 UTC m=+2021.166513371"
Jan 30 08:43:10 crc kubenswrapper[4870]: I0130 08:43:10.529669 4870 generic.go:334] "Generic (PLEG): container finished" podID="a685318c-e23f-4192-8ab4-7dbf24880b0d" containerID="41f27eabe1099445b59edd44241e4073e4d7dcaf32de5af8d9511c8bb8c53398" exitCode=0
Jan 30 08:43:10 crc kubenswrapper[4870]: I0130 08:43:10.529805 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl6bw" event={"ID":"a685318c-e23f-4192-8ab4-7dbf24880b0d","Type":"ContainerDied","Data":"41f27eabe1099445b59edd44241e4073e4d7dcaf32de5af8d9511c8bb8c53398"}
Jan 30 08:43:11 crc kubenswrapper[4870]: I0130 08:43:11.954240 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl6bw"
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.137858 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a685318c-e23f-4192-8ab4-7dbf24880b0d-ssh-key-openstack-edpm-ipam\") pod \"a685318c-e23f-4192-8ab4-7dbf24880b0d\" (UID: \"a685318c-e23f-4192-8ab4-7dbf24880b0d\") "
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.138072 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a685318c-e23f-4192-8ab4-7dbf24880b0d-inventory\") pod \"a685318c-e23f-4192-8ab4-7dbf24880b0d\" (UID: \"a685318c-e23f-4192-8ab4-7dbf24880b0d\") "
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.138119 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdkrm\" (UniqueName: \"kubernetes.io/projected/a685318c-e23f-4192-8ab4-7dbf24880b0d-kube-api-access-gdkrm\") pod \"a685318c-e23f-4192-8ab4-7dbf24880b0d\" (UID: \"a685318c-e23f-4192-8ab4-7dbf24880b0d\") "
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.144323 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a685318c-e23f-4192-8ab4-7dbf24880b0d-kube-api-access-gdkrm" (OuterVolumeSpecName: "kube-api-access-gdkrm") pod "a685318c-e23f-4192-8ab4-7dbf24880b0d" (UID: "a685318c-e23f-4192-8ab4-7dbf24880b0d"). InnerVolumeSpecName "kube-api-access-gdkrm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.173923 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a685318c-e23f-4192-8ab4-7dbf24880b0d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a685318c-e23f-4192-8ab4-7dbf24880b0d" (UID: "a685318c-e23f-4192-8ab4-7dbf24880b0d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.180654 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a685318c-e23f-4192-8ab4-7dbf24880b0d-inventory" (OuterVolumeSpecName: "inventory") pod "a685318c-e23f-4192-8ab4-7dbf24880b0d" (UID: "a685318c-e23f-4192-8ab4-7dbf24880b0d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.241058 4870 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a685318c-e23f-4192-8ab4-7dbf24880b0d-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.241096 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdkrm\" (UniqueName: \"kubernetes.io/projected/a685318c-e23f-4192-8ab4-7dbf24880b0d-kube-api-access-gdkrm\") on node \"crc\" DevicePath \"\""
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.241110 4870 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a685318c-e23f-4192-8ab4-7dbf24880b0d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.550160 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl6bw" event={"ID":"a685318c-e23f-4192-8ab4-7dbf24880b0d","Type":"ContainerDied","Data":"98e9af6a5a9f939da73b19d3e83f7092c62a5c0fbe279d5298471495f178146d"}
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.550240 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98e9af6a5a9f939da73b19d3e83f7092c62a5c0fbe279d5298471495f178146d"
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.550237 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl6bw"
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.622352 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29"]
Jan 30 08:43:12 crc kubenswrapper[4870]: E0130 08:43:12.622841 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a685318c-e23f-4192-8ab4-7dbf24880b0d" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.622868 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="a685318c-e23f-4192-8ab4-7dbf24880b0d" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.623144 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="a685318c-e23f-4192-8ab4-7dbf24880b0d" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.624053 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29"
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.625810 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.626658 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n6f5w"
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.626822 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.627007 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.639234 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29"]
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.649401 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qr9b\" (UniqueName: \"kubernetes.io/projected/7c9e0c7d-dc65-4862-99da-326bc8d45bfd-kube-api-access-5qr9b\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29\" (UID: \"7c9e0c7d-dc65-4862-99da-326bc8d45bfd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29"
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.649488 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c9e0c7d-dc65-4862-99da-326bc8d45bfd-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29\" (UID: \"7c9e0c7d-dc65-4862-99da-326bc8d45bfd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29"
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.649541 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c9e0c7d-dc65-4862-99da-326bc8d45bfd-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29\" (UID: \"7c9e0c7d-dc65-4862-99da-326bc8d45bfd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29"
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.750647 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qr9b\" (UniqueName: \"kubernetes.io/projected/7c9e0c7d-dc65-4862-99da-326bc8d45bfd-kube-api-access-5qr9b\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29\" (UID: \"7c9e0c7d-dc65-4862-99da-326bc8d45bfd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29"
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.750719 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c9e0c7d-dc65-4862-99da-326bc8d45bfd-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29\" (UID: \"7c9e0c7d-dc65-4862-99da-326bc8d45bfd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29"
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.750781 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c9e0c7d-dc65-4862-99da-326bc8d45bfd-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29\" (UID: \"7c9e0c7d-dc65-4862-99da-326bc8d45bfd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29"
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.755600 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c9e0c7d-dc65-4862-99da-326bc8d45bfd-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29\" (UID: \"7c9e0c7d-dc65-4862-99da-326bc8d45bfd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29"
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.760422 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c9e0c7d-dc65-4862-99da-326bc8d45bfd-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29\" (UID: \"7c9e0c7d-dc65-4862-99da-326bc8d45bfd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29"
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.776440 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qr9b\" (UniqueName: \"kubernetes.io/projected/7c9e0c7d-dc65-4862-99da-326bc8d45bfd-kube-api-access-5qr9b\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29\" (UID: \"7c9e0c7d-dc65-4862-99da-326bc8d45bfd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29"
Jan 30 08:43:12 crc kubenswrapper[4870]: I0130 08:43:12.941108 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29"
Jan 30 08:43:13 crc kubenswrapper[4870]: I0130 08:43:13.508821 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29"]
Jan 30 08:43:13 crc kubenswrapper[4870]: I0130 08:43:13.559722 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29" event={"ID":"7c9e0c7d-dc65-4862-99da-326bc8d45bfd","Type":"ContainerStarted","Data":"056c785611d96cce791fea952d02746ae7d8237c5a27e5a107a3d1799ffee03a"}
Jan 30 08:43:14 crc kubenswrapper[4870]: I0130 08:43:14.570717 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29" event={"ID":"7c9e0c7d-dc65-4862-99da-326bc8d45bfd","Type":"ContainerStarted","Data":"fb6da9f1b6e1c5ea9c983d97d60198441c391892546fcff50f89801de2f5a6bd"}
Jan 30 08:43:14 crc kubenswrapper[4870]: I0130 08:43:14.592339 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29" podStartSLOduration=2.12668546 podStartE2EDuration="2.592311815s" podCreationTimestamp="2026-01-30 08:43:12 +0000 UTC" firstStartedPulling="2026-01-30 08:43:13.503441966 +0000 UTC m=+2032.198989075" lastFinishedPulling="2026-01-30 08:43:13.969068321 +0000 UTC m=+2032.664615430" observedRunningTime="2026-01-30 08:43:14.584808812 +0000 UTC m=+2033.280355951" watchObservedRunningTime="2026-01-30 08:43:14.592311815 +0000 UTC m=+2033.287858924"
Jan 30 08:43:23 crc kubenswrapper[4870]: I0130 08:43:23.646683 4870 generic.go:334] "Generic (PLEG): container finished" podID="7c9e0c7d-dc65-4862-99da-326bc8d45bfd" containerID="fb6da9f1b6e1c5ea9c983d97d60198441c391892546fcff50f89801de2f5a6bd" exitCode=0
Jan 30 08:43:23 crc kubenswrapper[4870]: I0130 08:43:23.646967 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29" event={"ID":"7c9e0c7d-dc65-4862-99da-326bc8d45bfd","Type":"ContainerDied","Data":"fb6da9f1b6e1c5ea9c983d97d60198441c391892546fcff50f89801de2f5a6bd"}
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.067619 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.199547 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c9e0c7d-dc65-4862-99da-326bc8d45bfd-ssh-key-openstack-edpm-ipam\") pod \"7c9e0c7d-dc65-4862-99da-326bc8d45bfd\" (UID: \"7c9e0c7d-dc65-4862-99da-326bc8d45bfd\") "
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.200112 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c9e0c7d-dc65-4862-99da-326bc8d45bfd-inventory\") pod \"7c9e0c7d-dc65-4862-99da-326bc8d45bfd\" (UID: \"7c9e0c7d-dc65-4862-99da-326bc8d45bfd\") "
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.200158 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qr9b\" (UniqueName: \"kubernetes.io/projected/7c9e0c7d-dc65-4862-99da-326bc8d45bfd-kube-api-access-5qr9b\") pod \"7c9e0c7d-dc65-4862-99da-326bc8d45bfd\" (UID: \"7c9e0c7d-dc65-4862-99da-326bc8d45bfd\") "
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.204402 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c9e0c7d-dc65-4862-99da-326bc8d45bfd-kube-api-access-5qr9b" (OuterVolumeSpecName: "kube-api-access-5qr9b") pod "7c9e0c7d-dc65-4862-99da-326bc8d45bfd" (UID: "7c9e0c7d-dc65-4862-99da-326bc8d45bfd"). InnerVolumeSpecName "kube-api-access-5qr9b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.226269 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c9e0c7d-dc65-4862-99da-326bc8d45bfd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7c9e0c7d-dc65-4862-99da-326bc8d45bfd" (UID: "7c9e0c7d-dc65-4862-99da-326bc8d45bfd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.228541 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c9e0c7d-dc65-4862-99da-326bc8d45bfd-inventory" (OuterVolumeSpecName: "inventory") pod "7c9e0c7d-dc65-4862-99da-326bc8d45bfd" (UID: "7c9e0c7d-dc65-4862-99da-326bc8d45bfd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.249995 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.250051 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.302050 4870 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c9e0c7d-dc65-4862-99da-326bc8d45bfd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.302090 4870 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c9e0c7d-dc65-4862-99da-326bc8d45bfd-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.302102 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qr9b\" (UniqueName: \"kubernetes.io/projected/7c9e0c7d-dc65-4862-99da-326bc8d45bfd-kube-api-access-5qr9b\") on node \"crc\" DevicePath \"\""
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.670051 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29" event={"ID":"7c9e0c7d-dc65-4862-99da-326bc8d45bfd","Type":"ContainerDied","Data":"056c785611d96cce791fea952d02746ae7d8237c5a27e5a107a3d1799ffee03a"}
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.670096 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="056c785611d96cce791fea952d02746ae7d8237c5a27e5a107a3d1799ffee03a"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.670173 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.766660 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"]
Jan 30 08:43:25 crc kubenswrapper[4870]: E0130 08:43:25.767138 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c9e0c7d-dc65-4862-99da-326bc8d45bfd" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.767161 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c9e0c7d-dc65-4862-99da-326bc8d45bfd" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.767406 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c9e0c7d-dc65-4862-99da-326bc8d45bfd" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.768725 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.774290 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.774480 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.774604 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.774713 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.774819 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.774938 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n6f5w"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.775065 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.775512 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.811100 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"]
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.911551 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.911893 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.911957 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.912003 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glrqd\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-kube-api-access-glrqd\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.912057 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.912162 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.912286 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.912350 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.912426 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.912481 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.912633 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.912691 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.912750 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:25 crc kubenswrapper[4870]: I0130 08:43:25.912783 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.015125 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glrqd\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-kube-api-access-glrqd\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.015257 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.015346 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.015447 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.015534 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.015612 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.015670 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.015797 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.015845 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.016015 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.016083 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.016154 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.016348 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.016393 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.021582 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.021604 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.023637 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.023967 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.024956 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.025270 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.025534 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.025797 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.025909 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.026322 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.028307 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.031075 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.034818 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glrqd\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-kube-api-access-glrqd\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.036353 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.095046 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.626695 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"]
Jan 30 08:43:26 crc kubenswrapper[4870]: I0130 08:43:26.678548 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx" event={"ID":"51d5d5e3-867b-4ec9-9fca-07038b83ba29","Type":"ContainerStarted","Data":"2e7af23946f70ff576e386febda735d9b078867d38c0a82ad1d3ba91aef60fca"}
Jan 30 08:43:27 crc kubenswrapper[4870]: I0130 08:43:27.688635 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx" event={"ID":"51d5d5e3-867b-4ec9-9fca-07038b83ba29","Type":"ContainerStarted","Data":"5f4fa977239c88cc1bd44a55cd7b7490e077c9fbc2dd85cfb9c83866198395b8"}
Jan 30 08:43:27 crc kubenswrapper[4870]: I0130 08:43:27.712602 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx" podStartSLOduration=2.249843844 podStartE2EDuration="2.712584809s" podCreationTimestamp="2026-01-30 08:43:25 +0000 UTC" firstStartedPulling="2026-01-30 08:43:26.619771486 +0000 UTC m=+2045.315318595" lastFinishedPulling="2026-01-30 08:43:27.082512451 +0000 UTC m=+2045.778059560" observedRunningTime="2026-01-30 08:43:27.709602766 +0000 UTC m=+2046.405149875" watchObservedRunningTime="2026-01-30 08:43:27.712584809 +0000 UTC m=+2046.408131918"
Jan 30 08:43:55 crc kubenswrapper[4870]: I0130 08:43:55.249034 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 08:43:55 crc kubenswrapper[4870]: I0130 08:43:55.249591 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 08:43:55 crc kubenswrapper[4870]: I0130 08:43:55.249752 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8"
Jan 30 08:43:55 crc kubenswrapper[4870]: I0130 08:43:55.250483 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c9fb81e78dfc8d967fae5bac7b245ddd0a04c8e07a775324570150884f7934d2"} pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 08:43:55 crc kubenswrapper[4870]: I0130 08:43:55.250544 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" containerID="cri-o://c9fb81e78dfc8d967fae5bac7b245ddd0a04c8e07a775324570150884f7934d2" gracePeriod=600
Jan 30 08:43:55 crc kubenswrapper[4870]: E0130 08:43:55.360825 4870 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d3c8db6_cf22_4fb2_ae7c_a3d544473a6d.slice/crio-c9fb81e78dfc8d967fae5bac7b245ddd0a04c8e07a775324570150884f7934d2.scope\": RecentStats: unable to find data in memory cache]"
Jan 30 08:43:55 crc kubenswrapper[4870]: I0130 08:43:55.974125 4870 generic.go:334] "Generic (PLEG): container finished" podID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerID="c9fb81e78dfc8d967fae5bac7b245ddd0a04c8e07a775324570150884f7934d2" exitCode=0
Jan 30 08:43:55 crc kubenswrapper[4870]: I0130 08:43:55.974200 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerDied","Data":"c9fb81e78dfc8d967fae5bac7b245ddd0a04c8e07a775324570150884f7934d2"}
Jan 30 08:43:55 crc kubenswrapper[4870]: I0130 08:43:55.974437 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerStarted","Data":"c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac"}
Jan 30 08:43:55 crc kubenswrapper[4870]: I0130 08:43:55.974460 4870 scope.go:117] "RemoveContainer" containerID="792f2902a1f9b1ca12c9f430b11c208dd4f06b6798f364f25a7a0c460c4f9a49"
Jan 30 08:43:57 crc kubenswrapper[4870]: I0130 08:43:57.495601 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l2hdv"]
Jan 30 08:43:57 crc kubenswrapper[4870]: I0130 08:43:57.498386 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2hdv"
Jan 30 08:43:57 crc kubenswrapper[4870]: I0130 08:43:57.508509 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l2hdv"]
Jan 30 08:43:57 crc kubenswrapper[4870]: I0130 08:43:57.655324 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7015d647-81a4-406d-9ea9-50ba0f8376ba-utilities\") pod \"redhat-operators-l2hdv\" (UID: \"7015d647-81a4-406d-9ea9-50ba0f8376ba\") " pod="openshift-marketplace/redhat-operators-l2hdv"
Jan 30 08:43:57 crc kubenswrapper[4870]: I0130 08:43:57.655442 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7015d647-81a4-406d-9ea9-50ba0f8376ba-catalog-content\") pod \"redhat-operators-l2hdv\" (UID: \"7015d647-81a4-406d-9ea9-50ba0f8376ba\") " pod="openshift-marketplace/redhat-operators-l2hdv"
Jan 30 08:43:57 crc kubenswrapper[4870]: I0130 08:43:57.655650 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x48f8\" (UniqueName: \"kubernetes.io/projected/7015d647-81a4-406d-9ea9-50ba0f8376ba-kube-api-access-x48f8\") pod \"redhat-operators-l2hdv\" (UID: \"7015d647-81a4-406d-9ea9-50ba0f8376ba\") " pod="openshift-marketplace/redhat-operators-l2hdv"
Jan 30 08:43:57 crc kubenswrapper[4870]: I0130 08:43:57.757507 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x48f8\" (UniqueName: \"kubernetes.io/projected/7015d647-81a4-406d-9ea9-50ba0f8376ba-kube-api-access-x48f8\") pod \"redhat-operators-l2hdv\" (UID: \"7015d647-81a4-406d-9ea9-50ba0f8376ba\") " pod="openshift-marketplace/redhat-operators-l2hdv"
Jan 30 08:43:57 crc kubenswrapper[4870]: I0130 08:43:57.757602 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7015d647-81a4-406d-9ea9-50ba0f8376ba-utilities\") pod \"redhat-operators-l2hdv\" (UID: \"7015d647-81a4-406d-9ea9-50ba0f8376ba\") " pod="openshift-marketplace/redhat-operators-l2hdv"
Jan 30 08:43:57 crc kubenswrapper[4870]: I0130 08:43:57.757672 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7015d647-81a4-406d-9ea9-50ba0f8376ba-catalog-content\") pod \"redhat-operators-l2hdv\" (UID: \"7015d647-81a4-406d-9ea9-50ba0f8376ba\") " pod="openshift-marketplace/redhat-operators-l2hdv"
Jan 30 08:43:57 crc kubenswrapper[4870]: I0130 08:43:57.758204 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7015d647-81a4-406d-9ea9-50ba0f8376ba-utilities\") pod \"redhat-operators-l2hdv\" (UID: \"7015d647-81a4-406d-9ea9-50ba0f8376ba\") " pod="openshift-marketplace/redhat-operators-l2hdv"
Jan 30 08:43:57 crc kubenswrapper[4870]: I0130 08:43:57.758204 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7015d647-81a4-406d-9ea9-50ba0f8376ba-catalog-content\") pod \"redhat-operators-l2hdv\" (UID: \"7015d647-81a4-406d-9ea9-50ba0f8376ba\") " pod="openshift-marketplace/redhat-operators-l2hdv"
Jan 30 08:43:57 crc kubenswrapper[4870]: I0130 08:43:57.777502 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x48f8\" (UniqueName: \"kubernetes.io/projected/7015d647-81a4-406d-9ea9-50ba0f8376ba-kube-api-access-x48f8\") pod \"redhat-operators-l2hdv\" (UID: \"7015d647-81a4-406d-9ea9-50ba0f8376ba\") " pod="openshift-marketplace/redhat-operators-l2hdv"
Jan 30 08:43:57 crc kubenswrapper[4870]: I0130 08:43:57.834387 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2hdv"
Jan 30 08:43:58 crc kubenswrapper[4870]: I0130 08:43:58.332565 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l2hdv"]
Jan 30 08:43:58 crc kubenswrapper[4870]: W0130 08:43:58.333239 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7015d647_81a4_406d_9ea9_50ba0f8376ba.slice/crio-5ac094d6e426443c5020bca9027144d8416ef01fe23ae02604a7c70ed2fa1c38 WatchSource:0}: Error finding container 5ac094d6e426443c5020bca9027144d8416ef01fe23ae02604a7c70ed2fa1c38: Status 404 returned error can't find the container with id 5ac094d6e426443c5020bca9027144d8416ef01fe23ae02604a7c70ed2fa1c38
Jan 30 08:43:59 crc kubenswrapper[4870]: I0130 08:43:59.009700 4870 generic.go:334] "Generic (PLEG): container finished" podID="7015d647-81a4-406d-9ea9-50ba0f8376ba" containerID="7ac3233c81049eeffa09b78437ea9a7c78a5ec459d0969d57b66799e4508c6f7" exitCode=0
Jan 30 08:43:59 crc kubenswrapper[4870]: I0130 08:43:59.010008 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2hdv" event={"ID":"7015d647-81a4-406d-9ea9-50ba0f8376ba","Type":"ContainerDied","Data":"7ac3233c81049eeffa09b78437ea9a7c78a5ec459d0969d57b66799e4508c6f7"}
Jan 30 08:43:59 crc kubenswrapper[4870]: I0130 08:43:59.010043 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2hdv" event={"ID":"7015d647-81a4-406d-9ea9-50ba0f8376ba","Type":"ContainerStarted","Data":"5ac094d6e426443c5020bca9027144d8416ef01fe23ae02604a7c70ed2fa1c38"}
Jan 30 08:44:00 crc kubenswrapper[4870]: I0130 08:44:00.022255 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2hdv" event={"ID":"7015d647-81a4-406d-9ea9-50ba0f8376ba","Type":"ContainerStarted","Data":"0b5963b6473dc23c8416476fa0419e10a0bd0cb40380f00a8cfac8f3dbeeeadd"}
Jan 30 08:44:01 crc kubenswrapper[4870]: I0130 08:44:01.033952 4870 generic.go:334] "Generic (PLEG): container finished" podID="7015d647-81a4-406d-9ea9-50ba0f8376ba" containerID="0b5963b6473dc23c8416476fa0419e10a0bd0cb40380f00a8cfac8f3dbeeeadd" exitCode=0
Jan 30 08:44:01 crc kubenswrapper[4870]: I0130 08:44:01.034062 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2hdv" event={"ID":"7015d647-81a4-406d-9ea9-50ba0f8376ba","Type":"ContainerDied","Data":"0b5963b6473dc23c8416476fa0419e10a0bd0cb40380f00a8cfac8f3dbeeeadd"}
Jan 30 08:44:02 crc kubenswrapper[4870]: I0130 08:44:02.049375 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2hdv" event={"ID":"7015d647-81a4-406d-9ea9-50ba0f8376ba","Type":"ContainerStarted","Data":"393953fb27b08df6d340759380c688ecd76bec4ed10755ff84a58927ae244eb1"}
Jan 30 08:44:02 crc kubenswrapper[4870]: I0130 08:44:02.072666 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l2hdv" podStartSLOduration=2.288960599 podStartE2EDuration="5.072647053s" podCreationTimestamp="2026-01-30 08:43:57 +0000 UTC" firstStartedPulling="2026-01-30 08:43:59.012445395 +0000 UTC m=+2077.707992504" lastFinishedPulling="2026-01-30 08:44:01.796131839 +0000 UTC m=+2080.491678958" observedRunningTime="2026-01-30 08:44:02.070558488 +0000 UTC m=+2080.766105597" watchObservedRunningTime="2026-01-30 08:44:02.072647053 +0000 UTC m=+2080.768194162"
Jan 30 08:44:04 crc kubenswrapper[4870]: I0130 08:44:04.086015 4870 generic.go:334] "Generic (PLEG): container finished" podID="51d5d5e3-867b-4ec9-9fca-07038b83ba29" containerID="5f4fa977239c88cc1bd44a55cd7b7490e077c9fbc2dd85cfb9c83866198395b8" exitCode=0
Jan 30 08:44:04 crc kubenswrapper[4870]: I0130 08:44:04.104236 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx" event={"ID":"51d5d5e3-867b-4ec9-9fca-07038b83ba29","Type":"ContainerDied","Data":"5f4fa977239c88cc1bd44a55cd7b7490e077c9fbc2dd85cfb9c83866198395b8"}
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.540063 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx"
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.731460 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-libvirt-combined-ca-bundle\") pod \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") "
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.731540 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-ovn-default-certs-0\") pod \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") "
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.731626 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-neutron-metadata-combined-ca-bundle\") pod \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") "
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.731671 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-nova-combined-ca-bundle\") pod \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") "
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.731760 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-bootstrap-combined-ca-bundle\") pod \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") "
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.731812 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glrqd\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-kube-api-access-glrqd\") pod \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") "
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.731900 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-telemetry-combined-ca-bundle\") pod \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") "
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.731959 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-repo-setup-combined-ca-bundle\") pod \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") "
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.731986 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") "
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.732019 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-ovn-combined-ca-bundle\") pod \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") "
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.732048 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-ssh-key-openstack-edpm-ipam\") pod \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") "
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.732076 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") "
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.732110 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-inventory\") pod \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") "
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.732137 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\" (UID: \"51d5d5e3-867b-4ec9-9fca-07038b83ba29\") "
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.740649 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "51d5d5e3-867b-4ec9-9fca-07038b83ba29" (UID: "51d5d5e3-867b-4ec9-9fca-07038b83ba29"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.740690 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "51d5d5e3-867b-4ec9-9fca-07038b83ba29" (UID: "51d5d5e3-867b-4ec9-9fca-07038b83ba29"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.740966 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "51d5d5e3-867b-4ec9-9fca-07038b83ba29" (UID: "51d5d5e3-867b-4ec9-9fca-07038b83ba29"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.741124 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "51d5d5e3-867b-4ec9-9fca-07038b83ba29" (UID: "51d5d5e3-867b-4ec9-9fca-07038b83ba29"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.741274 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "51d5d5e3-867b-4ec9-9fca-07038b83ba29" (UID: "51d5d5e3-867b-4ec9-9fca-07038b83ba29"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.746117 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "51d5d5e3-867b-4ec9-9fca-07038b83ba29" (UID: "51d5d5e3-867b-4ec9-9fca-07038b83ba29"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.746544 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "51d5d5e3-867b-4ec9-9fca-07038b83ba29" (UID: "51d5d5e3-867b-4ec9-9fca-07038b83ba29"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.746631 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "51d5d5e3-867b-4ec9-9fca-07038b83ba29" (UID: "51d5d5e3-867b-4ec9-9fca-07038b83ba29"). InnerVolumeSpecName "nova-combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.746954 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "51d5d5e3-867b-4ec9-9fca-07038b83ba29" (UID: "51d5d5e3-867b-4ec9-9fca-07038b83ba29"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.748352 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-kube-api-access-glrqd" (OuterVolumeSpecName: "kube-api-access-glrqd") pod "51d5d5e3-867b-4ec9-9fca-07038b83ba29" (UID: "51d5d5e3-867b-4ec9-9fca-07038b83ba29"). InnerVolumeSpecName "kube-api-access-glrqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.750527 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "51d5d5e3-867b-4ec9-9fca-07038b83ba29" (UID: "51d5d5e3-867b-4ec9-9fca-07038b83ba29"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.753701 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "51d5d5e3-867b-4ec9-9fca-07038b83ba29" (UID: "51d5d5e3-867b-4ec9-9fca-07038b83ba29"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.779927 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "51d5d5e3-867b-4ec9-9fca-07038b83ba29" (UID: "51d5d5e3-867b-4ec9-9fca-07038b83ba29"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.781470 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-inventory" (OuterVolumeSpecName: "inventory") pod "51d5d5e3-867b-4ec9-9fca-07038b83ba29" (UID: "51d5d5e3-867b-4ec9-9fca-07038b83ba29"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.835402 4870 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.835440 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glrqd\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-kube-api-access-glrqd\") on node \"crc\" DevicePath \"\"" Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.835450 4870 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.835458 4870 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.835471 4870 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.835483 4870 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.835497 4870 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.835509 4870 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.835523 4870 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.835533 4870 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.835543 4870 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.835552 4870 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/51d5d5e3-867b-4ec9-9fca-07038b83ba29-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.835560 4870 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:44:05 crc kubenswrapper[4870]: I0130 08:44:05.835570 4870 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d5d5e3-867b-4ec9-9fca-07038b83ba29-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.104719 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx" event={"ID":"51d5d5e3-867b-4ec9-9fca-07038b83ba29","Type":"ContainerDied","Data":"2e7af23946f70ff576e386febda735d9b078867d38c0a82ad1d3ba91aef60fca"} Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.104770 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e7af23946f70ff576e386febda735d9b078867d38c0a82ad1d3ba91aef60fca" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.104810 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.223238 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z"] Jan 30 08:44:06 crc kubenswrapper[4870]: E0130 08:44:06.223701 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d5d5e3-867b-4ec9-9fca-07038b83ba29" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.223725 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d5d5e3-867b-4ec9-9fca-07038b83ba29" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.223937 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="51d5d5e3-867b-4ec9-9fca-07038b83ba29" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.224602 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.226786 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.226799 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.226975 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n6f5w" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.227015 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.228470 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.236768 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z"] Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.345598 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/11f380d9-7c41-4b65-a46d-01c14ac81c07-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z72z\" (UID: \"11f380d9-7c41-4b65-a46d-01c14ac81c07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.345695 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f380d9-7c41-4b65-a46d-01c14ac81c07-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z72z\" (UID: \"11f380d9-7c41-4b65-a46d-01c14ac81c07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.345726 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11f380d9-7c41-4b65-a46d-01c14ac81c07-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z72z\" (UID: \"11f380d9-7c41-4b65-a46d-01c14ac81c07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.345783 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11f380d9-7c41-4b65-a46d-01c14ac81c07-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z72z\" (UID: \"11f380d9-7c41-4b65-a46d-01c14ac81c07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.345934 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlvcq\" (UniqueName: \"kubernetes.io/projected/11f380d9-7c41-4b65-a46d-01c14ac81c07-kube-api-access-xlvcq\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z72z\" (UID: \"11f380d9-7c41-4b65-a46d-01c14ac81c07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.447953 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11f380d9-7c41-4b65-a46d-01c14ac81c07-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z72z\" (UID: \"11f380d9-7c41-4b65-a46d-01c14ac81c07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.448023 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlvcq\" (UniqueName: \"kubernetes.io/projected/11f380d9-7c41-4b65-a46d-01c14ac81c07-kube-api-access-xlvcq\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z72z\" (UID: \"11f380d9-7c41-4b65-a46d-01c14ac81c07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.448137 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/11f380d9-7c41-4b65-a46d-01c14ac81c07-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z72z\" (UID: \"11f380d9-7c41-4b65-a46d-01c14ac81c07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.448190 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f380d9-7c41-4b65-a46d-01c14ac81c07-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z72z\" (UID: \"11f380d9-7c41-4b65-a46d-01c14ac81c07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.448235 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11f380d9-7c41-4b65-a46d-01c14ac81c07-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z72z\" (UID: \"11f380d9-7c41-4b65-a46d-01c14ac81c07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.449203 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/11f380d9-7c41-4b65-a46d-01c14ac81c07-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z72z\" (UID: \"11f380d9-7c41-4b65-a46d-01c14ac81c07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.451978 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11f380d9-7c41-4b65-a46d-01c14ac81c07-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z72z\" (UID: \"11f380d9-7c41-4b65-a46d-01c14ac81c07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.452271 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f380d9-7c41-4b65-a46d-01c14ac81c07-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z72z\" (UID: \"11f380d9-7c41-4b65-a46d-01c14ac81c07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.453269 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/11f380d9-7c41-4b65-a46d-01c14ac81c07-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z72z\" (UID: \"11f380d9-7c41-4b65-a46d-01c14ac81c07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.465828 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlvcq\" (UniqueName: \"kubernetes.io/projected/11f380d9-7c41-4b65-a46d-01c14ac81c07-kube-api-access-xlvcq\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z72z\" (UID: \"11f380d9-7c41-4b65-a46d-01c14ac81c07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" Jan 30 08:44:06 crc kubenswrapper[4870]: I0130 08:44:06.549636 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" Jan 30 08:44:07 crc kubenswrapper[4870]: I0130 08:44:07.131555 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z"] Jan 30 08:44:07 crc kubenswrapper[4870]: I0130 08:44:07.837051 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l2hdv" Jan 30 08:44:07 crc kubenswrapper[4870]: I0130 08:44:07.839251 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l2hdv" Jan 30 08:44:07 crc kubenswrapper[4870]: I0130 08:44:07.892392 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l2hdv" Jan 30 08:44:08 crc kubenswrapper[4870]: I0130 08:44:08.122758 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" event={"ID":"11f380d9-7c41-4b65-a46d-01c14ac81c07","Type":"ContainerStarted","Data":"a16d38c22e97a59730cb881ed767b3b67bb0324d1db69e50abec5c00f259b66e"} Jan 30 08:44:08 crc kubenswrapper[4870]: I0130 08:44:08.173649 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l2hdv" Jan 30 08:44:08 crc kubenswrapper[4870]: I0130 08:44:08.226081 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l2hdv"] Jan 30 08:44:09 crc kubenswrapper[4870]: I0130 08:44:09.133276 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" event={"ID":"11f380d9-7c41-4b65-a46d-01c14ac81c07","Type":"ContainerStarted","Data":"aa4d81ec188518a93eca2918c2b6aef7524e7ef50e3d95e59184540a039c7929"} Jan 30 08:44:09 crc kubenswrapper[4870]: I0130 08:44:09.153046 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" podStartSLOduration=2.340435801 podStartE2EDuration="3.153028434s" podCreationTimestamp="2026-01-30 08:44:06 +0000 UTC" firstStartedPulling="2026-01-30 08:44:07.136363643 +0000 UTC m=+2085.831910752" lastFinishedPulling="2026-01-30 08:44:07.948956276 +0000 UTC m=+2086.644503385" observedRunningTime="2026-01-30 08:44:09.149389941 +0000 UTC m=+2087.844937050" watchObservedRunningTime="2026-01-30 08:44:09.153028434 +0000 UTC m=+2087.848575543" Jan 30 08:44:10 crc kubenswrapper[4870]: I0130 08:44:10.141178 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l2hdv" podUID="7015d647-81a4-406d-9ea9-50ba0f8376ba" 
containerName="registry-server" containerID="cri-o://393953fb27b08df6d340759380c688ecd76bec4ed10755ff84a58927ae244eb1" gracePeriod=2 Jan 30 08:44:11 crc kubenswrapper[4870]: I0130 08:44:11.152249 4870 generic.go:334] "Generic (PLEG): container finished" podID="7015d647-81a4-406d-9ea9-50ba0f8376ba" containerID="393953fb27b08df6d340759380c688ecd76bec4ed10755ff84a58927ae244eb1" exitCode=0 Jan 30 08:44:11 crc kubenswrapper[4870]: I0130 08:44:11.152332 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2hdv" event={"ID":"7015d647-81a4-406d-9ea9-50ba0f8376ba","Type":"ContainerDied","Data":"393953fb27b08df6d340759380c688ecd76bec4ed10755ff84a58927ae244eb1"} Jan 30 08:44:11 crc kubenswrapper[4870]: I0130 08:44:11.689015 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2hdv" Jan 30 08:44:11 crc kubenswrapper[4870]: I0130 08:44:11.889332 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7015d647-81a4-406d-9ea9-50ba0f8376ba-catalog-content\") pod \"7015d647-81a4-406d-9ea9-50ba0f8376ba\" (UID: \"7015d647-81a4-406d-9ea9-50ba0f8376ba\") " Jan 30 08:44:11 crc kubenswrapper[4870]: I0130 08:44:11.889564 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x48f8\" (UniqueName: \"kubernetes.io/projected/7015d647-81a4-406d-9ea9-50ba0f8376ba-kube-api-access-x48f8\") pod \"7015d647-81a4-406d-9ea9-50ba0f8376ba\" (UID: \"7015d647-81a4-406d-9ea9-50ba0f8376ba\") " Jan 30 08:44:11 crc kubenswrapper[4870]: I0130 08:44:11.889625 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7015d647-81a4-406d-9ea9-50ba0f8376ba-utilities\") pod \"7015d647-81a4-406d-9ea9-50ba0f8376ba\" (UID: \"7015d647-81a4-406d-9ea9-50ba0f8376ba\") " Jan 30 08:44:11 crc kubenswrapper[4870]: I0130 08:44:11.891001 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7015d647-81a4-406d-9ea9-50ba0f8376ba-utilities" (OuterVolumeSpecName: "utilities") pod "7015d647-81a4-406d-9ea9-50ba0f8376ba" (UID: "7015d647-81a4-406d-9ea9-50ba0f8376ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:44:11 crc kubenswrapper[4870]: I0130 08:44:11.896152 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7015d647-81a4-406d-9ea9-50ba0f8376ba-kube-api-access-x48f8" (OuterVolumeSpecName: "kube-api-access-x48f8") pod "7015d647-81a4-406d-9ea9-50ba0f8376ba" (UID: "7015d647-81a4-406d-9ea9-50ba0f8376ba"). InnerVolumeSpecName "kube-api-access-x48f8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:44:11 crc kubenswrapper[4870]: I0130 08:44:11.992814 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x48f8\" (UniqueName: \"kubernetes.io/projected/7015d647-81a4-406d-9ea9-50ba0f8376ba-kube-api-access-x48f8\") on node \"crc\" DevicePath \"\"" Jan 30 08:44:11 crc kubenswrapper[4870]: I0130 08:44:11.992856 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7015d647-81a4-406d-9ea9-50ba0f8376ba-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:44:12 crc kubenswrapper[4870]: I0130 08:44:12.010779 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7015d647-81a4-406d-9ea9-50ba0f8376ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7015d647-81a4-406d-9ea9-50ba0f8376ba" (UID: "7015d647-81a4-406d-9ea9-50ba0f8376ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:44:12 crc kubenswrapper[4870]: I0130 08:44:12.093769 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7015d647-81a4-406d-9ea9-50ba0f8376ba-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:44:12 crc kubenswrapper[4870]: I0130 08:44:12.170170 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2hdv" event={"ID":"7015d647-81a4-406d-9ea9-50ba0f8376ba","Type":"ContainerDied","Data":"5ac094d6e426443c5020bca9027144d8416ef01fe23ae02604a7c70ed2fa1c38"} Jan 30 08:44:12 crc kubenswrapper[4870]: I0130 08:44:12.170247 4870 scope.go:117] "RemoveContainer" containerID="393953fb27b08df6d340759380c688ecd76bec4ed10755ff84a58927ae244eb1" Jan 30 08:44:12 crc kubenswrapper[4870]: I0130 08:44:12.170264 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l2hdv" Jan 30 08:44:12 crc kubenswrapper[4870]: I0130 08:44:12.193633 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l2hdv"] Jan 30 08:44:12 crc kubenswrapper[4870]: I0130 08:44:12.202027 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l2hdv"] Jan 30 08:44:12 crc kubenswrapper[4870]: I0130 08:44:12.204709 4870 scope.go:117] "RemoveContainer" containerID="0b5963b6473dc23c8416476fa0419e10a0bd0cb40380f00a8cfac8f3dbeeeadd" Jan 30 08:44:12 crc kubenswrapper[4870]: I0130 08:44:12.229772 4870 scope.go:117] "RemoveContainer" containerID="7ac3233c81049eeffa09b78437ea9a7c78a5ec459d0969d57b66799e4508c6f7" Jan 30 08:44:14 crc kubenswrapper[4870]: I0130 08:44:14.086334 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7015d647-81a4-406d-9ea9-50ba0f8376ba" path="/var/lib/kubelet/pods/7015d647-81a4-406d-9ea9-50ba0f8376ba/volumes" Jan 30 08:45:00 crc kubenswrapper[4870]: I0130 08:45:00.152717 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj"] Jan 30 08:45:00 crc kubenswrapper[4870]: E0130 08:45:00.154379 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7015d647-81a4-406d-9ea9-50ba0f8376ba" containerName="extract-utilities" Jan 30 08:45:00 crc kubenswrapper[4870]: I0130 08:45:00.154406 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="7015d647-81a4-406d-9ea9-50ba0f8376ba" containerName="extract-utilities" Jan 30 08:45:00 crc kubenswrapper[4870]: E0130 08:45:00.154459 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7015d647-81a4-406d-9ea9-50ba0f8376ba" containerName="extract-content" Jan 30 08:45:00 crc kubenswrapper[4870]: I0130 08:45:00.154466 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="7015d647-81a4-406d-9ea9-50ba0f8376ba" containerName="extract-content" Jan 30 08:45:00 crc kubenswrapper[4870]: E0130 08:45:00.154479 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7015d647-81a4-406d-9ea9-50ba0f8376ba" containerName="registry-server" Jan 30 08:45:00 crc kubenswrapper[4870]: I0130 08:45:00.154485 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="7015d647-81a4-406d-9ea9-50ba0f8376ba" containerName="registry-server" Jan 30 08:45:00 crc kubenswrapper[4870]: I0130 08:45:00.154744 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="7015d647-81a4-406d-9ea9-50ba0f8376ba" containerName="registry-server" Jan 30 08:45:00 crc kubenswrapper[4870]: I0130 08:45:00.155858 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj" Jan 30 08:45:00 crc kubenswrapper[4870]: I0130 08:45:00.159510 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 08:45:00 crc kubenswrapper[4870]: I0130 08:45:00.159764 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 08:45:00 crc kubenswrapper[4870]: I0130 08:45:00.172828 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj"] Jan 30 08:45:00 crc kubenswrapper[4870]: I0130 08:45:00.333802 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjts2\" (UniqueName: \"kubernetes.io/projected/e9c91153-7a90-4c60-811f-915f8ccf0bdf-kube-api-access-qjts2\") pod \"collect-profiles-29496045-wr5sj\" (UID: \"e9c91153-7a90-4c60-811f-915f8ccf0bdf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj" Jan 30 08:45:00 crc kubenswrapper[4870]: I0130 08:45:00.334252 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e9c91153-7a90-4c60-811f-915f8ccf0bdf-secret-volume\") pod \"collect-profiles-29496045-wr5sj\" (UID: \"e9c91153-7a90-4c60-811f-915f8ccf0bdf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj" Jan 30 08:45:00 crc kubenswrapper[4870]: I0130 08:45:00.334393 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9c91153-7a90-4c60-811f-915f8ccf0bdf-config-volume\") pod \"collect-profiles-29496045-wr5sj\" (UID: \"e9c91153-7a90-4c60-811f-915f8ccf0bdf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj" Jan 30 08:45:00 crc kubenswrapper[4870]: I0130 08:45:00.436869 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9c91153-7a90-4c60-811f-915f8ccf0bdf-config-volume\") pod \"collect-profiles-29496045-wr5sj\" (UID: \"e9c91153-7a90-4c60-811f-915f8ccf0bdf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj" Jan 30 08:45:00 crc kubenswrapper[4870]: I0130 08:45:00.436995 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjts2\" (UniqueName: \"kubernetes.io/projected/e9c91153-7a90-4c60-811f-915f8ccf0bdf-kube-api-access-qjts2\") pod \"collect-profiles-29496045-wr5sj\" (UID: \"e9c91153-7a90-4c60-811f-915f8ccf0bdf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj" Jan 30 08:45:00 crc kubenswrapper[4870]: I0130 08:45:00.437097 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e9c91153-7a90-4c60-811f-915f8ccf0bdf-secret-volume\") pod \"collect-profiles-29496045-wr5sj\" (UID: \"e9c91153-7a90-4c60-811f-915f8ccf0bdf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj" Jan 30 08:45:00 crc kubenswrapper[4870]: I0130 08:45:00.438259 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9c91153-7a90-4c60-811f-915f8ccf0bdf-config-volume\") pod 
\"collect-profiles-29496045-wr5sj\" (UID: \"e9c91153-7a90-4c60-811f-915f8ccf0bdf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj" Jan 30 08:45:00 crc kubenswrapper[4870]: I0130 08:45:00.449178 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e9c91153-7a90-4c60-811f-915f8ccf0bdf-secret-volume\") pod \"collect-profiles-29496045-wr5sj\" (UID: \"e9c91153-7a90-4c60-811f-915f8ccf0bdf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj" Jan 30 08:45:00 crc kubenswrapper[4870]: I0130 08:45:00.458675 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjts2\" (UniqueName: \"kubernetes.io/projected/e9c91153-7a90-4c60-811f-915f8ccf0bdf-kube-api-access-qjts2\") pod \"collect-profiles-29496045-wr5sj\" (UID: \"e9c91153-7a90-4c60-811f-915f8ccf0bdf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj" Jan 30 08:45:00 crc kubenswrapper[4870]: I0130 08:45:00.487052 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj" Jan 30 08:45:00 crc kubenswrapper[4870]: I0130 08:45:00.947706 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj"] Jan 30 08:45:01 crc kubenswrapper[4870]: I0130 08:45:01.676419 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj" event={"ID":"e9c91153-7a90-4c60-811f-915f8ccf0bdf","Type":"ContainerStarted","Data":"479ba30159faf1bd5abe17d0fd8bcbe0c86c787b6f5f69ef68ac1e6330cdb3a2"} Jan 30 08:45:01 crc kubenswrapper[4870]: I0130 08:45:01.676707 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj" event={"ID":"e9c91153-7a90-4c60-811f-915f8ccf0bdf","Type":"ContainerStarted","Data":"aa224fd3965f88a5a13b568410ba8bb45dead9a3a435594797d76327580780a2"} Jan 30 08:45:01 crc kubenswrapper[4870]: I0130 08:45:01.702246 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj" podStartSLOduration=1.702221442 podStartE2EDuration="1.702221442s" podCreationTimestamp="2026-01-30 08:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:45:01.692791208 +0000 UTC m=+2140.388338327" watchObservedRunningTime="2026-01-30 08:45:01.702221442 +0000 UTC m=+2140.397768561" Jan 30 08:45:02 crc kubenswrapper[4870]: I0130 08:45:02.686229 4870 generic.go:334] "Generic (PLEG): container finished" podID="e9c91153-7a90-4c60-811f-915f8ccf0bdf" containerID="479ba30159faf1bd5abe17d0fd8bcbe0c86c787b6f5f69ef68ac1e6330cdb3a2" exitCode=0 Jan 30 08:45:02 crc kubenswrapper[4870]: I0130 08:45:02.686285 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj" event={"ID":"e9c91153-7a90-4c60-811f-915f8ccf0bdf","Type":"ContainerDied","Data":"479ba30159faf1bd5abe17d0fd8bcbe0c86c787b6f5f69ef68ac1e6330cdb3a2"} Jan 30 08:45:04 crc kubenswrapper[4870]: I0130 08:45:04.034114 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj" Jan 30 08:45:04 crc kubenswrapper[4870]: I0130 08:45:04.219030 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjts2\" (UniqueName: \"kubernetes.io/projected/e9c91153-7a90-4c60-811f-915f8ccf0bdf-kube-api-access-qjts2\") pod \"e9c91153-7a90-4c60-811f-915f8ccf0bdf\" (UID: \"e9c91153-7a90-4c60-811f-915f8ccf0bdf\") " Jan 30 08:45:04 crc kubenswrapper[4870]: I0130 08:45:04.219267 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e9c91153-7a90-4c60-811f-915f8ccf0bdf-secret-volume\") pod \"e9c91153-7a90-4c60-811f-915f8ccf0bdf\" (UID: \"e9c91153-7a90-4c60-811f-915f8ccf0bdf\") " Jan 30 08:45:04 crc kubenswrapper[4870]: I0130 08:45:04.219462 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9c91153-7a90-4c60-811f-915f8ccf0bdf-config-volume\") pod \"e9c91153-7a90-4c60-811f-915f8ccf0bdf\" (UID: \"e9c91153-7a90-4c60-811f-915f8ccf0bdf\") " Jan 30 08:45:04 crc kubenswrapper[4870]: I0130 08:45:04.220369 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9c91153-7a90-4c60-811f-915f8ccf0bdf-config-volume" (OuterVolumeSpecName: "config-volume") pod "e9c91153-7a90-4c60-811f-915f8ccf0bdf" (UID: "e9c91153-7a90-4c60-811f-915f8ccf0bdf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:45:04 crc kubenswrapper[4870]: I0130 08:45:04.223102 4870 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9c91153-7a90-4c60-811f-915f8ccf0bdf-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:04 crc kubenswrapper[4870]: I0130 08:45:04.226699 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9c91153-7a90-4c60-811f-915f8ccf0bdf-kube-api-access-qjts2" (OuterVolumeSpecName: "kube-api-access-qjts2") pod "e9c91153-7a90-4c60-811f-915f8ccf0bdf" (UID: "e9c91153-7a90-4c60-811f-915f8ccf0bdf"). InnerVolumeSpecName "kube-api-access-qjts2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:45:04 crc kubenswrapper[4870]: I0130 08:45:04.226750 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c91153-7a90-4c60-811f-915f8ccf0bdf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e9c91153-7a90-4c60-811f-915f8ccf0bdf" (UID: "e9c91153-7a90-4c60-811f-915f8ccf0bdf"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:45:04 crc kubenswrapper[4870]: I0130 08:45:04.325814 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjts2\" (UniqueName: \"kubernetes.io/projected/e9c91153-7a90-4c60-811f-915f8ccf0bdf-kube-api-access-qjts2\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:04 crc kubenswrapper[4870]: I0130 08:45:04.325847 4870 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e9c91153-7a90-4c60-811f-915f8ccf0bdf-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:04 crc kubenswrapper[4870]: I0130 08:45:04.704429 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj" event={"ID":"e9c91153-7a90-4c60-811f-915f8ccf0bdf","Type":"ContainerDied","Data":"aa224fd3965f88a5a13b568410ba8bb45dead9a3a435594797d76327580780a2"} Jan 30 08:45:04 crc kubenswrapper[4870]: I0130 08:45:04.704475 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa224fd3965f88a5a13b568410ba8bb45dead9a3a435594797d76327580780a2" Jan 30 08:45:04 crc kubenswrapper[4870]: I0130 08:45:04.704488 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj" Jan 30 08:45:04 crc kubenswrapper[4870]: I0130 08:45:04.774453 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92"] Jan 30 08:45:04 crc kubenswrapper[4870]: I0130 08:45:04.795989 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496000-25p92"] Jan 30 08:45:06 crc kubenswrapper[4870]: I0130 08:45:06.087705 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93fd6b37-eee2-4fd5-aa18-51eecea65a3b" path="/var/lib/kubelet/pods/93fd6b37-eee2-4fd5-aa18-51eecea65a3b/volumes" Jan 30 08:45:12 crc kubenswrapper[4870]: I0130 08:45:12.780964 4870 generic.go:334] "Generic (PLEG): container finished" podID="11f380d9-7c41-4b65-a46d-01c14ac81c07" containerID="aa4d81ec188518a93eca2918c2b6aef7524e7ef50e3d95e59184540a039c7929" exitCode=0 Jan 30 08:45:12 crc kubenswrapper[4870]: I0130 08:45:12.781540 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" event={"ID":"11f380d9-7c41-4b65-a46d-01c14ac81c07","Type":"ContainerDied","Data":"aa4d81ec188518a93eca2918c2b6aef7524e7ef50e3d95e59184540a039c7929"} Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.231362 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.327987 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f380d9-7c41-4b65-a46d-01c14ac81c07-ovn-combined-ca-bundle\") pod \"11f380d9-7c41-4b65-a46d-01c14ac81c07\" (UID: \"11f380d9-7c41-4b65-a46d-01c14ac81c07\") " Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.328042 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/11f380d9-7c41-4b65-a46d-01c14ac81c07-ovncontroller-config-0\") pod \"11f380d9-7c41-4b65-a46d-01c14ac81c07\" (UID: \"11f380d9-7c41-4b65-a46d-01c14ac81c07\") " Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.328268 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11f380d9-7c41-4b65-a46d-01c14ac81c07-inventory\") pod \"11f380d9-7c41-4b65-a46d-01c14ac81c07\" (UID: \"11f380d9-7c41-4b65-a46d-01c14ac81c07\") " Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.328290 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11f380d9-7c41-4b65-a46d-01c14ac81c07-ssh-key-openstack-edpm-ipam\") pod \"11f380d9-7c41-4b65-a46d-01c14ac81c07\" (UID: \"11f380d9-7c41-4b65-a46d-01c14ac81c07\") " Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.328307 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlvcq\" (UniqueName: \"kubernetes.io/projected/11f380d9-7c41-4b65-a46d-01c14ac81c07-kube-api-access-xlvcq\") pod \"11f380d9-7c41-4b65-a46d-01c14ac81c07\" (UID: \"11f380d9-7c41-4b65-a46d-01c14ac81c07\") " Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.333396 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f380d9-7c41-4b65-a46d-01c14ac81c07-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "11f380d9-7c41-4b65-a46d-01c14ac81c07" (UID: "11f380d9-7c41-4b65-a46d-01c14ac81c07"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.333594 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11f380d9-7c41-4b65-a46d-01c14ac81c07-kube-api-access-xlvcq" (OuterVolumeSpecName: "kube-api-access-xlvcq") pod "11f380d9-7c41-4b65-a46d-01c14ac81c07" (UID: "11f380d9-7c41-4b65-a46d-01c14ac81c07"). InnerVolumeSpecName "kube-api-access-xlvcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.352400 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11f380d9-7c41-4b65-a46d-01c14ac81c07-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "11f380d9-7c41-4b65-a46d-01c14ac81c07" (UID: "11f380d9-7c41-4b65-a46d-01c14ac81c07"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.355022 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f380d9-7c41-4b65-a46d-01c14ac81c07-inventory" (OuterVolumeSpecName: "inventory") pod "11f380d9-7c41-4b65-a46d-01c14ac81c07" (UID: "11f380d9-7c41-4b65-a46d-01c14ac81c07"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.357231 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f380d9-7c41-4b65-a46d-01c14ac81c07-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "11f380d9-7c41-4b65-a46d-01c14ac81c07" (UID: "11f380d9-7c41-4b65-a46d-01c14ac81c07"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.430458 4870 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f380d9-7c41-4b65-a46d-01c14ac81c07-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.430498 4870 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/11f380d9-7c41-4b65-a46d-01c14ac81c07-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.430508 4870 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11f380d9-7c41-4b65-a46d-01c14ac81c07-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.430518 4870 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11f380d9-7c41-4b65-a46d-01c14ac81c07-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.430531 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlvcq\" (UniqueName: \"kubernetes.io/projected/11f380d9-7c41-4b65-a46d-01c14ac81c07-kube-api-access-xlvcq\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.800034 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" event={"ID":"11f380d9-7c41-4b65-a46d-01c14ac81c07","Type":"ContainerDied","Data":"a16d38c22e97a59730cb881ed767b3b67bb0324d1db69e50abec5c00f259b66e"} Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.800070 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a16d38c22e97a59730cb881ed767b3b67bb0324d1db69e50abec5c00f259b66e" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.800087 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z72z" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.901709 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8"] Jan 30 08:45:14 crc kubenswrapper[4870]: E0130 08:45:14.902313 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f380d9-7c41-4b65-a46d-01c14ac81c07" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.902336 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f380d9-7c41-4b65-a46d-01c14ac81c07" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 08:45:14 crc kubenswrapper[4870]: E0130 08:45:14.902368 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c91153-7a90-4c60-811f-915f8ccf0bdf" containerName="collect-profiles" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.902381 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c91153-7a90-4c60-811f-915f8ccf0bdf" containerName="collect-profiles" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.902607 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9c91153-7a90-4c60-811f-915f8ccf0bdf" containerName="collect-profiles" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.902656 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="11f380d9-7c41-4b65-a46d-01c14ac81c07" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.903540 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.906504 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.906735 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.907006 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.907573 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.908759 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n6f5w" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.908856 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 08:45:14 crc kubenswrapper[4870]: I0130 08:45:14.912279 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8"] Jan 30 08:45:15 crc kubenswrapper[4870]: I0130 08:45:15.041977 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" Jan 30 08:45:15 crc kubenswrapper[4870]: I0130 08:45:15.042060 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" Jan 30 08:45:15 crc kubenswrapper[4870]: I0130 08:45:15.042095 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" Jan 30 08:45:15 crc kubenswrapper[4870]: I0130 08:45:15.042146 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" Jan 30 08:45:15 crc kubenswrapper[4870]: I0130 08:45:15.042362 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcwcv\" (UniqueName: \"kubernetes.io/projected/bbcba502-7991-4f7b-bdbd-b112cec436b9-kube-api-access-mcwcv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" Jan 30 08:45:15 crc kubenswrapper[4870]: I0130 08:45:15.042408 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" Jan 30 08:45:15 crc kubenswrapper[4870]: I0130 08:45:15.144270 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" Jan 30 08:45:15 crc kubenswrapper[4870]: I0130 08:45:15.145467 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" Jan 30 08:45:15 crc kubenswrapper[4870]: I0130 08:45:15.145629 4870 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" Jan 30 08:45:15 crc kubenswrapper[4870]: I0130 08:45:15.145862 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcwcv\" (UniqueName: \"kubernetes.io/projected/bbcba502-7991-4f7b-bdbd-b112cec436b9-kube-api-access-mcwcv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" Jan 30 08:45:15 crc kubenswrapper[4870]: I0130 08:45:15.146064 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" Jan 30 08:45:15 crc kubenswrapper[4870]: I0130 08:45:15.146199 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" Jan 30 08:45:15 crc kubenswrapper[4870]: I0130 08:45:15.150594 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" Jan 30 08:45:15 crc kubenswrapper[4870]: I0130 08:45:15.150705 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" Jan 30 08:45:15 crc kubenswrapper[4870]: I0130 08:45:15.151121 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" Jan 30 08:45:15 crc kubenswrapper[4870]: I0130 08:45:15.152507 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8\" (UID: 
\"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" Jan 30 08:45:15 crc kubenswrapper[4870]: I0130 08:45:15.152529 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" Jan 30 08:45:15 crc kubenswrapper[4870]: I0130 08:45:15.166161 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcwcv\" (UniqueName: \"kubernetes.io/projected/bbcba502-7991-4f7b-bdbd-b112cec436b9-kube-api-access-mcwcv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" Jan 30 08:45:15 crc kubenswrapper[4870]: I0130 08:45:15.235587 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" Jan 30 08:45:15 crc kubenswrapper[4870]: I0130 08:45:15.731670 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8"] Jan 30 08:45:15 crc kubenswrapper[4870]: I0130 08:45:15.811848 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" event={"ID":"bbcba502-7991-4f7b-bdbd-b112cec436b9","Type":"ContainerStarted","Data":"322fb62ba42e648747673d353c56acbccd668b8ace4d3ac0389c386c755de181"} Jan 30 08:45:16 crc kubenswrapper[4870]: I0130 08:45:16.835666 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" event={"ID":"bbcba502-7991-4f7b-bdbd-b112cec436b9","Type":"ContainerStarted","Data":"bee60a9b8d5f543e434738aa5f0e9131d5a086cce903cbf89bbe4f59ccb94b7e"} Jan 30 08:45:16 crc kubenswrapper[4870]: I0130 08:45:16.857987 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" podStartSLOduration=2.375650216 podStartE2EDuration="2.85796892s" podCreationTimestamp="2026-01-30 08:45:14 +0000 UTC" firstStartedPulling="2026-01-30 08:45:15.734761642 +0000 UTC m=+2154.430308751" lastFinishedPulling="2026-01-30 08:45:16.217080326 +0000 UTC m=+2154.912627455" observedRunningTime="2026-01-30 08:45:16.85346494 +0000 UTC m=+2155.549012059" watchObservedRunningTime="2026-01-30 08:45:16.85796892 +0000 UTC m=+2155.553516029" Jan 30 08:45:37 crc kubenswrapper[4870]: I0130 08:45:37.829380 4870 scope.go:117] "RemoveContainer" containerID="2065e95b92696a8bb664d6087b11271d4f8873eafbf3cee077ccf40c2dbf8d79" Jan 30 08:45:55 crc kubenswrapper[4870]: I0130 08:45:55.249072 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:45:55 crc kubenswrapper[4870]: I0130 08:45:55.249595 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" 
podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:46:04 crc kubenswrapper[4870]: I0130 08:46:04.950638 4870 generic.go:334] "Generic (PLEG): container finished" podID="bbcba502-7991-4f7b-bdbd-b112cec436b9" containerID="bee60a9b8d5f543e434738aa5f0e9131d5a086cce903cbf89bbe4f59ccb94b7e" exitCode=0 Jan 30 08:46:04 crc kubenswrapper[4870]: I0130 08:46:04.950719 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" event={"ID":"bbcba502-7991-4f7b-bdbd-b112cec436b9","Type":"ContainerDied","Data":"bee60a9b8d5f543e434738aa5f0e9131d5a086cce903cbf89bbe4f59ccb94b7e"} Jan 30 08:46:05 crc kubenswrapper[4870]: I0130 08:46:05.322330 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k2s7g"] Jan 30 08:46:05 crc kubenswrapper[4870]: I0130 08:46:05.330708 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k2s7g" Jan 30 08:46:05 crc kubenswrapper[4870]: I0130 08:46:05.345555 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k2s7g"] Jan 30 08:46:05 crc kubenswrapper[4870]: I0130 08:46:05.426634 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5befb693-ddf7-4fe9-b77a-1ec961e1b2f1-utilities\") pod \"community-operators-k2s7g\" (UID: \"5befb693-ddf7-4fe9-b77a-1ec961e1b2f1\") " pod="openshift-marketplace/community-operators-k2s7g" Jan 30 08:46:05 crc kubenswrapper[4870]: I0130 08:46:05.426751 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lscq\" (UniqueName: \"kubernetes.io/projected/5befb693-ddf7-4fe9-b77a-1ec961e1b2f1-kube-api-access-5lscq\") pod \"community-operators-k2s7g\" (UID: \"5befb693-ddf7-4fe9-b77a-1ec961e1b2f1\") " pod="openshift-marketplace/community-operators-k2s7g" Jan 30 08:46:05 crc kubenswrapper[4870]: I0130 08:46:05.426789 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5befb693-ddf7-4fe9-b77a-1ec961e1b2f1-catalog-content\") pod \"community-operators-k2s7g\" (UID: \"5befb693-ddf7-4fe9-b77a-1ec961e1b2f1\") " pod="openshift-marketplace/community-operators-k2s7g" Jan 30 08:46:05 crc kubenswrapper[4870]: I0130 08:46:05.528663 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5befb693-ddf7-4fe9-b77a-1ec961e1b2f1-utilities\") pod \"community-operators-k2s7g\" (UID: \"5befb693-ddf7-4fe9-b77a-1ec961e1b2f1\") " pod="openshift-marketplace/community-operators-k2s7g" Jan 30 08:46:05 crc kubenswrapper[4870]: I0130 08:46:05.528773 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lscq\" (UniqueName: \"kubernetes.io/projected/5befb693-ddf7-4fe9-b77a-1ec961e1b2f1-kube-api-access-5lscq\") pod \"community-operators-k2s7g\" (UID: \"5befb693-ddf7-4fe9-b77a-1ec961e1b2f1\") " pod="openshift-marketplace/community-operators-k2s7g" Jan 30 08:46:05 crc kubenswrapper[4870]: I0130 08:46:05.528802 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/5befb693-ddf7-4fe9-b77a-1ec961e1b2f1-catalog-content\") pod \"community-operators-k2s7g\" (UID: \"5befb693-ddf7-4fe9-b77a-1ec961e1b2f1\") " pod="openshift-marketplace/community-operators-k2s7g" Jan 30 08:46:05 crc kubenswrapper[4870]: I0130 08:46:05.529267 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5befb693-ddf7-4fe9-b77a-1ec961e1b2f1-catalog-content\") pod \"community-operators-k2s7g\" (UID: \"5befb693-ddf7-4fe9-b77a-1ec961e1b2f1\") " pod="openshift-marketplace/community-operators-k2s7g" Jan 30 08:46:05 crc kubenswrapper[4870]: I0130 08:46:05.529444 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5befb693-ddf7-4fe9-b77a-1ec961e1b2f1-utilities\") pod \"community-operators-k2s7g\" (UID: \"5befb693-ddf7-4fe9-b77a-1ec961e1b2f1\") " pod="openshift-marketplace/community-operators-k2s7g" Jan 30 08:46:05 crc kubenswrapper[4870]: I0130 08:46:05.551004 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lscq\" (UniqueName: \"kubernetes.io/projected/5befb693-ddf7-4fe9-b77a-1ec961e1b2f1-kube-api-access-5lscq\") pod \"community-operators-k2s7g\" (UID: \"5befb693-ddf7-4fe9-b77a-1ec961e1b2f1\") " pod="openshift-marketplace/community-operators-k2s7g" Jan 30 08:46:05 crc kubenswrapper[4870]: I0130 08:46:05.662172 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k2s7g" Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.283702 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k2s7g"] Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.433818 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.557563 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-neutron-metadata-combined-ca-bundle\") pod \"bbcba502-7991-4f7b-bdbd-b112cec436b9\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.557669 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-inventory\") pod \"bbcba502-7991-4f7b-bdbd-b112cec436b9\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.557698 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcwcv\" (UniqueName: \"kubernetes.io/projected/bbcba502-7991-4f7b-bdbd-b112cec436b9-kube-api-access-mcwcv\") pod \"bbcba502-7991-4f7b-bdbd-b112cec436b9\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.557921 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-nova-metadata-neutron-config-0\") pod \"bbcba502-7991-4f7b-bdbd-b112cec436b9\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.557946 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-ssh-key-openstack-edpm-ipam\") pod \"bbcba502-7991-4f7b-bdbd-b112cec436b9\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.558009 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"bbcba502-7991-4f7b-bdbd-b112cec436b9\" (UID: \"bbcba502-7991-4f7b-bdbd-b112cec436b9\") " Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.572066 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "bbcba502-7991-4f7b-bdbd-b112cec436b9" (UID: "bbcba502-7991-4f7b-bdbd-b112cec436b9"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.583253 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbcba502-7991-4f7b-bdbd-b112cec436b9-kube-api-access-mcwcv" (OuterVolumeSpecName: "kube-api-access-mcwcv") pod "bbcba502-7991-4f7b-bdbd-b112cec436b9" (UID: "bbcba502-7991-4f7b-bdbd-b112cec436b9"). InnerVolumeSpecName "kube-api-access-mcwcv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.620010 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "bbcba502-7991-4f7b-bdbd-b112cec436b9" (UID: "bbcba502-7991-4f7b-bdbd-b112cec436b9"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.657088 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bbcba502-7991-4f7b-bdbd-b112cec436b9" (UID: "bbcba502-7991-4f7b-bdbd-b112cec436b9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.661810 4870 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.661864 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcwcv\" (UniqueName: \"kubernetes.io/projected/bbcba502-7991-4f7b-bdbd-b112cec436b9-kube-api-access-mcwcv\") on node \"crc\" DevicePath \"\"" Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.661902 4870 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.661918 4870 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.689072 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-inventory" (OuterVolumeSpecName: "inventory") pod "bbcba502-7991-4f7b-bdbd-b112cec436b9" (UID: "bbcba502-7991-4f7b-bdbd-b112cec436b9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.694764 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "bbcba502-7991-4f7b-bdbd-b112cec436b9" (UID: "bbcba502-7991-4f7b-bdbd-b112cec436b9"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.763281 4870 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.763589 4870 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbcba502-7991-4f7b-bdbd-b112cec436b9-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.981160 4870 generic.go:334] "Generic (PLEG): container finished" podID="5befb693-ddf7-4fe9-b77a-1ec961e1b2f1" containerID="1fa08930f61eeae7ff87f60ec6402e00026918ea40b0836370f9478928ccd5d8" exitCode=0 Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.981237 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2s7g" event={"ID":"5befb693-ddf7-4fe9-b77a-1ec961e1b2f1","Type":"ContainerDied","Data":"1fa08930f61eeae7ff87f60ec6402e00026918ea40b0836370f9478928ccd5d8"} Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.981276 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2s7g" event={"ID":"5befb693-ddf7-4fe9-b77a-1ec961e1b2f1","Type":"ContainerStarted","Data":"15c6cbff9549dfa3cb8b176dfb869490faf191326af5b9742ef3e438454b1062"} Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.983236 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" event={"ID":"bbcba502-7991-4f7b-bdbd-b112cec436b9","Type":"ContainerDied","Data":"322fb62ba42e648747673d353c56acbccd668b8ace4d3ac0389c386c755de181"} Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.983262 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="322fb62ba42e648747673d353c56acbccd668b8ace4d3ac0389c386c755de181" Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.983318 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8" Jan 30 08:46:06 crc kubenswrapper[4870]: I0130 08:46:06.986486 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.068815 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr"] Jan 30 08:46:07 crc kubenswrapper[4870]: E0130 08:46:07.069417 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbcba502-7991-4f7b-bdbd-b112cec436b9" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.069442 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbcba502-7991-4f7b-bdbd-b112cec436b9" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.069699 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbcba502-7991-4f7b-bdbd-b112cec436b9" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.070648 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.078087 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr"] Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.078334 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.078399 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.078525 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n6f5w" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.078628 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.078768 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.178191 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqzbr\" (UniqueName: \"kubernetes.io/projected/9e214e41-a575-467c-a053-d6807c4f1512-kube-api-access-sqzbr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-26lfr\" (UID: \"9e214e41-a575-467c-a053-d6807c4f1512\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.178508 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-26lfr\" (UID: \"9e214e41-a575-467c-a053-d6807c4f1512\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.178587 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-26lfr\" (UID: \"9e214e41-a575-467c-a053-d6807c4f1512\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.178835 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-26lfr\" (UID: \"9e214e41-a575-467c-a053-d6807c4f1512\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.179001 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-26lfr\" (UID: \"9e214e41-a575-467c-a053-d6807c4f1512\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.280475 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-26lfr\" (UID: \"9e214e41-a575-467c-a053-d6807c4f1512\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.280575 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-26lfr\" (UID: \"9e214e41-a575-467c-a053-d6807c4f1512\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.280657 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-26lfr\" (UID: \"9e214e41-a575-467c-a053-d6807c4f1512\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.280709 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-26lfr\" (UID: \"9e214e41-a575-467c-a053-d6807c4f1512\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.280759 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqzbr\" (UniqueName: \"kubernetes.io/projected/9e214e41-a575-467c-a053-d6807c4f1512-kube-api-access-sqzbr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-26lfr\" (UID: \"9e214e41-a575-467c-a053-d6807c4f1512\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.288419 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-26lfr\" (UID: \"9e214e41-a575-467c-a053-d6807c4f1512\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.288479 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-26lfr\" (UID: \"9e214e41-a575-467c-a053-d6807c4f1512\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.288592 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-26lfr\" (UID: \"9e214e41-a575-467c-a053-d6807c4f1512\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.289039 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-26lfr\" (UID: \"9e214e41-a575-467c-a053-d6807c4f1512\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.308274 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqzbr\" (UniqueName: \"kubernetes.io/projected/9e214e41-a575-467c-a053-d6807c4f1512-kube-api-access-sqzbr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-26lfr\" (UID: \"9e214e41-a575-467c-a053-d6807c4f1512\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.394054 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" Jan 30 08:46:07 crc kubenswrapper[4870]: I0130 08:46:07.940998 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr"] Jan 30 08:46:07 crc kubenswrapper[4870]: W0130 08:46:07.947414 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e214e41_a575_467c_a053_d6807c4f1512.slice/crio-958101a8528cf0e341326792f2f6ae6259384318c3c17b73fec5224f1c834d1c WatchSource:0}: Error finding container 958101a8528cf0e341326792f2f6ae6259384318c3c17b73fec5224f1c834d1c: Status 404 returned error can't find the container with id 958101a8528cf0e341326792f2f6ae6259384318c3c17b73fec5224f1c834d1c Jan 30 08:46:08 crc kubenswrapper[4870]: I0130 08:46:08.011813 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2s7g" event={"ID":"5befb693-ddf7-4fe9-b77a-1ec961e1b2f1","Type":"ContainerStarted","Data":"ac1d83add4726895af17972200c5228205b0b5c89bfd1defebc40a14e95dbe37"} Jan 30 08:46:08 crc kubenswrapper[4870]: I0130 08:46:08.013223 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" event={"ID":"9e214e41-a575-467c-a053-d6807c4f1512","Type":"ContainerStarted","Data":"958101a8528cf0e341326792f2f6ae6259384318c3c17b73fec5224f1c834d1c"} Jan 30 08:46:09 crc kubenswrapper[4870]: I0130 08:46:09.025345 4870 generic.go:334] "Generic (PLEG): container finished" podID="5befb693-ddf7-4fe9-b77a-1ec961e1b2f1" containerID="ac1d83add4726895af17972200c5228205b0b5c89bfd1defebc40a14e95dbe37" exitCode=0 Jan 30 08:46:09 crc kubenswrapper[4870]: I0130 08:46:09.025401 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2s7g" event={"ID":"5befb693-ddf7-4fe9-b77a-1ec961e1b2f1","Type":"ContainerDied","Data":"ac1d83add4726895af17972200c5228205b0b5c89bfd1defebc40a14e95dbe37"} Jan 30 08:46:09 crc kubenswrapper[4870]: I0130 08:46:09.028012 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" event={"ID":"9e214e41-a575-467c-a053-d6807c4f1512","Type":"ContainerStarted","Data":"849cf3c1bde9a8f54182673096d806dd29610e89f52d8b6fde1f213230d3c284"} Jan 30 08:46:09 crc kubenswrapper[4870]: I0130 08:46:09.064762 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" podStartSLOduration=1.558669259 podStartE2EDuration="2.064739244s" podCreationTimestamp="2026-01-30 08:46:07 +0000 UTC" 
firstStartedPulling="2026-01-30 08:46:07.952237688 +0000 UTC m=+2206.647784797" lastFinishedPulling="2026-01-30 08:46:08.458307673 +0000 UTC m=+2207.153854782" observedRunningTime="2026-01-30 08:46:09.061849904 +0000 UTC m=+2207.757397023" watchObservedRunningTime="2026-01-30 08:46:09.064739244 +0000 UTC m=+2207.760286363" Jan 30 08:46:10 crc kubenswrapper[4870]: I0130 08:46:10.039008 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2s7g" event={"ID":"5befb693-ddf7-4fe9-b77a-1ec961e1b2f1","Type":"ContainerStarted","Data":"f106933e145a67e8999b56022a3749d70fbe3673069ce5dcb65b1319a3995fe1"} Jan 30 08:46:10 crc kubenswrapper[4870]: I0130 08:46:10.065539 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k2s7g" podStartSLOduration=2.53888043 podStartE2EDuration="5.06552182s" podCreationTimestamp="2026-01-30 08:46:05 +0000 UTC" firstStartedPulling="2026-01-30 08:46:06.986067614 +0000 UTC m=+2205.681614723" lastFinishedPulling="2026-01-30 08:46:09.512708984 +0000 UTC m=+2208.208256113" observedRunningTime="2026-01-30 08:46:10.056679054 +0000 UTC m=+2208.752226193" watchObservedRunningTime="2026-01-30 08:46:10.06552182 +0000 UTC m=+2208.761068919" Jan 30 08:46:15 crc kubenswrapper[4870]: I0130 08:46:15.662955 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k2s7g" Jan 30 08:46:15 crc kubenswrapper[4870]: I0130 08:46:15.663317 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k2s7g" Jan 30 08:46:15 crc kubenswrapper[4870]: I0130 08:46:15.729569 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k2s7g" Jan 30 08:46:16 crc kubenswrapper[4870]: I0130 08:46:16.141533 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k2s7g" Jan 30 08:46:16 crc kubenswrapper[4870]: I0130 08:46:16.195063 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k2s7g"] Jan 30 08:46:18 crc kubenswrapper[4870]: I0130 08:46:18.108439 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k2s7g" podUID="5befb693-ddf7-4fe9-b77a-1ec961e1b2f1" containerName="registry-server" containerID="cri-o://f106933e145a67e8999b56022a3749d70fbe3673069ce5dcb65b1319a3995fe1" gracePeriod=2 Jan 30 08:46:18 crc kubenswrapper[4870]: I0130 08:46:18.585069 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k2s7g" Jan 30 08:46:18 crc kubenswrapper[4870]: I0130 08:46:18.725136 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5befb693-ddf7-4fe9-b77a-1ec961e1b2f1-utilities\") pod \"5befb693-ddf7-4fe9-b77a-1ec961e1b2f1\" (UID: \"5befb693-ddf7-4fe9-b77a-1ec961e1b2f1\") " Jan 30 08:46:18 crc kubenswrapper[4870]: I0130 08:46:18.725232 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lscq\" (UniqueName: \"kubernetes.io/projected/5befb693-ddf7-4fe9-b77a-1ec961e1b2f1-kube-api-access-5lscq\") pod \"5befb693-ddf7-4fe9-b77a-1ec961e1b2f1\" (UID: \"5befb693-ddf7-4fe9-b77a-1ec961e1b2f1\") " Jan 30 08:46:18 crc kubenswrapper[4870]: I0130 08:46:18.725297 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5befb693-ddf7-4fe9-b77a-1ec961e1b2f1-catalog-content\") pod \"5befb693-ddf7-4fe9-b77a-1ec961e1b2f1\" (UID: \"5befb693-ddf7-4fe9-b77a-1ec961e1b2f1\") " Jan 30 08:46:18 crc kubenswrapper[4870]: I0130 08:46:18.726458 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5befb693-ddf7-4fe9-b77a-1ec961e1b2f1-utilities" (OuterVolumeSpecName: "utilities") pod "5befb693-ddf7-4fe9-b77a-1ec961e1b2f1" (UID: "5befb693-ddf7-4fe9-b77a-1ec961e1b2f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:46:18 crc kubenswrapper[4870]: I0130 08:46:18.784086 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5befb693-ddf7-4fe9-b77a-1ec961e1b2f1-kube-api-access-5lscq" (OuterVolumeSpecName: "kube-api-access-5lscq") pod "5befb693-ddf7-4fe9-b77a-1ec961e1b2f1" (UID: "5befb693-ddf7-4fe9-b77a-1ec961e1b2f1"). InnerVolumeSpecName "kube-api-access-5lscq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:46:18 crc kubenswrapper[4870]: I0130 08:46:18.800368 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5befb693-ddf7-4fe9-b77a-1ec961e1b2f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5befb693-ddf7-4fe9-b77a-1ec961e1b2f1" (UID: "5befb693-ddf7-4fe9-b77a-1ec961e1b2f1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:46:18 crc kubenswrapper[4870]: I0130 08:46:18.827453 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5befb693-ddf7-4fe9-b77a-1ec961e1b2f1-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:46:18 crc kubenswrapper[4870]: I0130 08:46:18.827483 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lscq\" (UniqueName: \"kubernetes.io/projected/5befb693-ddf7-4fe9-b77a-1ec961e1b2f1-kube-api-access-5lscq\") on node \"crc\" DevicePath \"\"" Jan 30 08:46:18 crc kubenswrapper[4870]: I0130 08:46:18.827494 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5befb693-ddf7-4fe9-b77a-1ec961e1b2f1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:46:19 crc kubenswrapper[4870]: I0130 08:46:19.122278 4870 generic.go:334] "Generic (PLEG): container finished" podID="5befb693-ddf7-4fe9-b77a-1ec961e1b2f1" containerID="f106933e145a67e8999b56022a3749d70fbe3673069ce5dcb65b1319a3995fe1" exitCode=0 Jan 30 08:46:19 crc kubenswrapper[4870]: I0130 08:46:19.122356 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k2s7g" Jan 30 08:46:19 crc kubenswrapper[4870]: I0130 08:46:19.122379 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2s7g" event={"ID":"5befb693-ddf7-4fe9-b77a-1ec961e1b2f1","Type":"ContainerDied","Data":"f106933e145a67e8999b56022a3749d70fbe3673069ce5dcb65b1319a3995fe1"} Jan 30 08:46:19 crc kubenswrapper[4870]: I0130 08:46:19.122930 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2s7g" event={"ID":"5befb693-ddf7-4fe9-b77a-1ec961e1b2f1","Type":"ContainerDied","Data":"15c6cbff9549dfa3cb8b176dfb869490faf191326af5b9742ef3e438454b1062"} Jan 30 08:46:19 crc kubenswrapper[4870]: I0130 08:46:19.122969 4870 scope.go:117] "RemoveContainer" containerID="f106933e145a67e8999b56022a3749d70fbe3673069ce5dcb65b1319a3995fe1" Jan 30 08:46:19 crc kubenswrapper[4870]: I0130 08:46:19.153836 4870 scope.go:117] "RemoveContainer" containerID="ac1d83add4726895af17972200c5228205b0b5c89bfd1defebc40a14e95dbe37" Jan 30 08:46:19 crc kubenswrapper[4870]: I0130 08:46:19.179333 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k2s7g"] Jan 30 08:46:19 crc kubenswrapper[4870]: I0130 08:46:19.191763 4870 scope.go:117] "RemoveContainer" containerID="1fa08930f61eeae7ff87f60ec6402e00026918ea40b0836370f9478928ccd5d8" Jan 30 08:46:19 crc kubenswrapper[4870]: I0130 08:46:19.195645 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k2s7g"] Jan 30 08:46:19 crc kubenswrapper[4870]: I0130 08:46:19.263503 4870 scope.go:117] "RemoveContainer" containerID="f106933e145a67e8999b56022a3749d70fbe3673069ce5dcb65b1319a3995fe1" Jan 30 08:46:19 crc kubenswrapper[4870]: E0130 08:46:19.263832 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f106933e145a67e8999b56022a3749d70fbe3673069ce5dcb65b1319a3995fe1\": container with ID starting with f106933e145a67e8999b56022a3749d70fbe3673069ce5dcb65b1319a3995fe1 not found: ID does not exist" containerID="f106933e145a67e8999b56022a3749d70fbe3673069ce5dcb65b1319a3995fe1" Jan 30 08:46:19 crc kubenswrapper[4870]: I0130 08:46:19.263861 
4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f106933e145a67e8999b56022a3749d70fbe3673069ce5dcb65b1319a3995fe1"} err="failed to get container status \"f106933e145a67e8999b56022a3749d70fbe3673069ce5dcb65b1319a3995fe1\": rpc error: code = NotFound desc = could not find container \"f106933e145a67e8999b56022a3749d70fbe3673069ce5dcb65b1319a3995fe1\": container with ID starting with f106933e145a67e8999b56022a3749d70fbe3673069ce5dcb65b1319a3995fe1 not found: ID does not exist" Jan 30 08:46:19 crc kubenswrapper[4870]: I0130 08:46:19.263902 4870 scope.go:117] "RemoveContainer" containerID="ac1d83add4726895af17972200c5228205b0b5c89bfd1defebc40a14e95dbe37" Jan 30 08:46:19 crc kubenswrapper[4870]: E0130 08:46:19.264367 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac1d83add4726895af17972200c5228205b0b5c89bfd1defebc40a14e95dbe37\": container with ID starting with ac1d83add4726895af17972200c5228205b0b5c89bfd1defebc40a14e95dbe37 not found: ID does not exist" containerID="ac1d83add4726895af17972200c5228205b0b5c89bfd1defebc40a14e95dbe37" Jan 30 08:46:19 crc kubenswrapper[4870]: I0130 08:46:19.264387 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac1d83add4726895af17972200c5228205b0b5c89bfd1defebc40a14e95dbe37"} err="failed to get container status \"ac1d83add4726895af17972200c5228205b0b5c89bfd1defebc40a14e95dbe37\": rpc error: code = NotFound desc = could not find container \"ac1d83add4726895af17972200c5228205b0b5c89bfd1defebc40a14e95dbe37\": container with ID starting with ac1d83add4726895af17972200c5228205b0b5c89bfd1defebc40a14e95dbe37 not found: ID does not exist" Jan 30 08:46:19 crc kubenswrapper[4870]: I0130 08:46:19.264400 4870 scope.go:117] "RemoveContainer" containerID="1fa08930f61eeae7ff87f60ec6402e00026918ea40b0836370f9478928ccd5d8" Jan 30 08:46:19 crc kubenswrapper[4870]: E0130 08:46:19.264578 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fa08930f61eeae7ff87f60ec6402e00026918ea40b0836370f9478928ccd5d8\": container with ID starting with 1fa08930f61eeae7ff87f60ec6402e00026918ea40b0836370f9478928ccd5d8 not found: ID does not exist" containerID="1fa08930f61eeae7ff87f60ec6402e00026918ea40b0836370f9478928ccd5d8" Jan 30 08:46:19 crc kubenswrapper[4870]: I0130 08:46:19.264594 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fa08930f61eeae7ff87f60ec6402e00026918ea40b0836370f9478928ccd5d8"} err="failed to get container status \"1fa08930f61eeae7ff87f60ec6402e00026918ea40b0836370f9478928ccd5d8\": rpc error: code = NotFound desc = could not find container \"1fa08930f61eeae7ff87f60ec6402e00026918ea40b0836370f9478928ccd5d8\": container with ID starting with 1fa08930f61eeae7ff87f60ec6402e00026918ea40b0836370f9478928ccd5d8 not found: ID does not exist" Jan 30 08:46:20 crc kubenswrapper[4870]: I0130 08:46:20.085310 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5befb693-ddf7-4fe9-b77a-1ec961e1b2f1" path="/var/lib/kubelet/pods/5befb693-ddf7-4fe9-b77a-1ec961e1b2f1/volumes" Jan 30 08:46:25 crc kubenswrapper[4870]: I0130 08:46:25.250265 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:46:25 crc kubenswrapper[4870]: I0130 08:46:25.250673 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:46:55 crc kubenswrapper[4870]: I0130 08:46:55.249870 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:46:55 crc kubenswrapper[4870]: I0130 08:46:55.250400 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:46:55 crc kubenswrapper[4870]: I0130 08:46:55.250455 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:46:55 crc kubenswrapper[4870]: I0130 08:46:55.251272 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac"} pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:46:55 crc kubenswrapper[4870]: I0130 08:46:55.251335 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" containerID="cri-o://c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" gracePeriod=600 Jan 30 08:46:55 crc kubenswrapper[4870]: E0130 08:46:55.425098 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:46:55 crc kubenswrapper[4870]: I0130 08:46:55.457063 4870 generic.go:334] "Generic (PLEG): container finished" podID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" exitCode=0 Jan 30 08:46:55 crc kubenswrapper[4870]: I0130 08:46:55.457369 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerDied","Data":"c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac"} Jan 30 08:46:55 crc kubenswrapper[4870]: I0130 08:46:55.457528 4870 scope.go:117] "RemoveContainer" containerID="c9fb81e78dfc8d967fae5bac7b245ddd0a04c8e07a775324570150884f7934d2" Jan 30 08:46:55 crc kubenswrapper[4870]: I0130 
08:46:55.458131 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:46:55 crc kubenswrapper[4870]: E0130 08:46:55.458366 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:46:57 crc kubenswrapper[4870]: I0130 08:46:57.103285 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-trmvn"] Jan 30 08:46:57 crc kubenswrapper[4870]: E0130 08:46:57.104050 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5befb693-ddf7-4fe9-b77a-1ec961e1b2f1" containerName="extract-utilities" Jan 30 08:46:57 crc kubenswrapper[4870]: I0130 08:46:57.104069 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="5befb693-ddf7-4fe9-b77a-1ec961e1b2f1" containerName="extract-utilities" Jan 30 08:46:57 crc kubenswrapper[4870]: E0130 08:46:57.104111 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5befb693-ddf7-4fe9-b77a-1ec961e1b2f1" containerName="registry-server" Jan 30 08:46:57 crc kubenswrapper[4870]: I0130 08:46:57.104120 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="5befb693-ddf7-4fe9-b77a-1ec961e1b2f1" containerName="registry-server" Jan 30 08:46:57 crc kubenswrapper[4870]: E0130 08:46:57.104135 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5befb693-ddf7-4fe9-b77a-1ec961e1b2f1" containerName="extract-content" Jan 30 08:46:57 crc kubenswrapper[4870]: I0130 08:46:57.104142 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="5befb693-ddf7-4fe9-b77a-1ec961e1b2f1" containerName="extract-content" Jan 30 08:46:57 crc kubenswrapper[4870]: I0130 08:46:57.104423 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="5befb693-ddf7-4fe9-b77a-1ec961e1b2f1" containerName="registry-server" Jan 30 08:46:57 crc kubenswrapper[4870]: I0130 08:46:57.106205 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-trmvn" Jan 30 08:46:57 crc kubenswrapper[4870]: I0130 08:46:57.114864 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-trmvn"] Jan 30 08:46:57 crc kubenswrapper[4870]: I0130 08:46:57.260386 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a9a4f69-8d1a-4d88-b2cf-97c8c661899a-utilities\") pod \"certified-operators-trmvn\" (UID: \"6a9a4f69-8d1a-4d88-b2cf-97c8c661899a\") " pod="openshift-marketplace/certified-operators-trmvn" Jan 30 08:46:57 crc kubenswrapper[4870]: I0130 08:46:57.260453 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a9a4f69-8d1a-4d88-b2cf-97c8c661899a-catalog-content\") pod \"certified-operators-trmvn\" (UID: \"6a9a4f69-8d1a-4d88-b2cf-97c8c661899a\") " pod="openshift-marketplace/certified-operators-trmvn" Jan 30 08:46:57 crc kubenswrapper[4870]: I0130 08:46:57.260556 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtf45\" (UniqueName: \"kubernetes.io/projected/6a9a4f69-8d1a-4d88-b2cf-97c8c661899a-kube-api-access-wtf45\") pod \"certified-operators-trmvn\" (UID: \"6a9a4f69-8d1a-4d88-b2cf-97c8c661899a\") " pod="openshift-marketplace/certified-operators-trmvn" Jan 30 08:46:57 crc kubenswrapper[4870]: I0130 08:46:57.363109 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a9a4f69-8d1a-4d88-b2cf-97c8c661899a-utilities\") pod \"certified-operators-trmvn\" (UID: \"6a9a4f69-8d1a-4d88-b2cf-97c8c661899a\") " pod="openshift-marketplace/certified-operators-trmvn" Jan 30 08:46:57 crc kubenswrapper[4870]: I0130 08:46:57.363207 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a9a4f69-8d1a-4d88-b2cf-97c8c661899a-catalog-content\") pod \"certified-operators-trmvn\" (UID: \"6a9a4f69-8d1a-4d88-b2cf-97c8c661899a\") " pod="openshift-marketplace/certified-operators-trmvn" Jan 30 08:46:57 crc kubenswrapper[4870]: I0130 08:46:57.363726 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a9a4f69-8d1a-4d88-b2cf-97c8c661899a-utilities\") pod \"certified-operators-trmvn\" (UID: \"6a9a4f69-8d1a-4d88-b2cf-97c8c661899a\") " pod="openshift-marketplace/certified-operators-trmvn" Jan 30 08:46:57 crc kubenswrapper[4870]: I0130 08:46:57.364004 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a9a4f69-8d1a-4d88-b2cf-97c8c661899a-catalog-content\") pod \"certified-operators-trmvn\" (UID: \"6a9a4f69-8d1a-4d88-b2cf-97c8c661899a\") " pod="openshift-marketplace/certified-operators-trmvn" Jan 30 08:46:57 crc kubenswrapper[4870]: I0130 08:46:57.364203 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtf45\" (UniqueName: \"kubernetes.io/projected/6a9a4f69-8d1a-4d88-b2cf-97c8c661899a-kube-api-access-wtf45\") pod \"certified-operators-trmvn\" (UID: \"6a9a4f69-8d1a-4d88-b2cf-97c8c661899a\") " pod="openshift-marketplace/certified-operators-trmvn" Jan 30 08:46:57 crc kubenswrapper[4870]: I0130 08:46:57.388524 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wtf45\" (UniqueName: \"kubernetes.io/projected/6a9a4f69-8d1a-4d88-b2cf-97c8c661899a-kube-api-access-wtf45\") pod \"certified-operators-trmvn\" (UID: \"6a9a4f69-8d1a-4d88-b2cf-97c8c661899a\") " pod="openshift-marketplace/certified-operators-trmvn" Jan 30 08:46:57 crc kubenswrapper[4870]: I0130 08:46:57.427037 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-trmvn" Jan 30 08:46:57 crc kubenswrapper[4870]: I0130 08:46:57.771955 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-trmvn"] Jan 30 08:46:58 crc kubenswrapper[4870]: I0130 08:46:58.488641 4870 generic.go:334] "Generic (PLEG): container finished" podID="6a9a4f69-8d1a-4d88-b2cf-97c8c661899a" containerID="fb63265b6df908944c072b8b056481ba1574a9802aca45041ac50a3b50cbe3ff" exitCode=0 Jan 30 08:46:58 crc kubenswrapper[4870]: I0130 08:46:58.488711 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trmvn" event={"ID":"6a9a4f69-8d1a-4d88-b2cf-97c8c661899a","Type":"ContainerDied","Data":"fb63265b6df908944c072b8b056481ba1574a9802aca45041ac50a3b50cbe3ff"} Jan 30 08:46:58 crc kubenswrapper[4870]: I0130 08:46:58.489097 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trmvn" event={"ID":"6a9a4f69-8d1a-4d88-b2cf-97c8c661899a","Type":"ContainerStarted","Data":"d437a8dec9fb30f00a4e34ade8f87279693722da031abd27b5c05ab979ca074a"} Jan 30 08:47:00 crc kubenswrapper[4870]: I0130 08:47:00.509603 4870 generic.go:334] "Generic (PLEG): container finished" podID="6a9a4f69-8d1a-4d88-b2cf-97c8c661899a" containerID="d0054c95b532e36424c3acf46688e0ebfdf292697edc56afd3f30e979d09653c" exitCode=0 Jan 30 08:47:00 crc kubenswrapper[4870]: I0130 08:47:00.509670 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trmvn" event={"ID":"6a9a4f69-8d1a-4d88-b2cf-97c8c661899a","Type":"ContainerDied","Data":"d0054c95b532e36424c3acf46688e0ebfdf292697edc56afd3f30e979d09653c"} Jan 30 08:47:01 crc kubenswrapper[4870]: I0130 08:47:01.520766 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trmvn" event={"ID":"6a9a4f69-8d1a-4d88-b2cf-97c8c661899a","Type":"ContainerStarted","Data":"d36182e2b07c4b05bbda48d653498be4113ed91928168f19497faa1185fccd8f"} Jan 30 08:47:07 crc kubenswrapper[4870]: I0130 08:47:07.075394 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:47:07 crc kubenswrapper[4870]: E0130 08:47:07.076271 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:47:07 crc kubenswrapper[4870]: I0130 08:47:07.427935 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-trmvn" Jan 30 08:47:07 crc kubenswrapper[4870]: I0130 08:47:07.428066 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-trmvn" Jan 30 08:47:07 crc 
kubenswrapper[4870]: I0130 08:47:07.479921 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-trmvn" Jan 30 08:47:07 crc kubenswrapper[4870]: I0130 08:47:07.507343 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-trmvn" podStartSLOduration=8.043887033 podStartE2EDuration="10.507318908s" podCreationTimestamp="2026-01-30 08:46:57 +0000 UTC" firstStartedPulling="2026-01-30 08:46:58.490626436 +0000 UTC m=+2257.186173555" lastFinishedPulling="2026-01-30 08:47:00.954058321 +0000 UTC m=+2259.649605430" observedRunningTime="2026-01-30 08:47:01.54151249 +0000 UTC m=+2260.237059609" watchObservedRunningTime="2026-01-30 08:47:07.507318908 +0000 UTC m=+2266.202866027" Jan 30 08:47:07 crc kubenswrapper[4870]: I0130 08:47:07.630588 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-trmvn" Jan 30 08:47:09 crc kubenswrapper[4870]: I0130 08:47:09.892319 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-trmvn"] Jan 30 08:47:10 crc kubenswrapper[4870]: I0130 08:47:10.615278 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-trmvn" podUID="6a9a4f69-8d1a-4d88-b2cf-97c8c661899a" containerName="registry-server" containerID="cri-o://d36182e2b07c4b05bbda48d653498be4113ed91928168f19497faa1185fccd8f" gracePeriod=2 Jan 30 08:47:11 crc kubenswrapper[4870]: I0130 08:47:11.627074 4870 generic.go:334] "Generic (PLEG): container finished" podID="6a9a4f69-8d1a-4d88-b2cf-97c8c661899a" containerID="d36182e2b07c4b05bbda48d653498be4113ed91928168f19497faa1185fccd8f" exitCode=0 Jan 30 08:47:11 crc kubenswrapper[4870]: I0130 08:47:11.627146 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trmvn" event={"ID":"6a9a4f69-8d1a-4d88-b2cf-97c8c661899a","Type":"ContainerDied","Data":"d36182e2b07c4b05bbda48d653498be4113ed91928168f19497faa1185fccd8f"} Jan 30 08:47:11 crc kubenswrapper[4870]: I0130 08:47:11.760220 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-trmvn" Jan 30 08:47:11 crc kubenswrapper[4870]: I0130 08:47:11.862736 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a9a4f69-8d1a-4d88-b2cf-97c8c661899a-catalog-content\") pod \"6a9a4f69-8d1a-4d88-b2cf-97c8c661899a\" (UID: \"6a9a4f69-8d1a-4d88-b2cf-97c8c661899a\") " Jan 30 08:47:11 crc kubenswrapper[4870]: I0130 08:47:11.862838 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a9a4f69-8d1a-4d88-b2cf-97c8c661899a-utilities\") pod \"6a9a4f69-8d1a-4d88-b2cf-97c8c661899a\" (UID: \"6a9a4f69-8d1a-4d88-b2cf-97c8c661899a\") " Jan 30 08:47:11 crc kubenswrapper[4870]: I0130 08:47:11.862948 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtf45\" (UniqueName: \"kubernetes.io/projected/6a9a4f69-8d1a-4d88-b2cf-97c8c661899a-kube-api-access-wtf45\") pod \"6a9a4f69-8d1a-4d88-b2cf-97c8c661899a\" (UID: \"6a9a4f69-8d1a-4d88-b2cf-97c8c661899a\") " Jan 30 08:47:11 crc kubenswrapper[4870]: I0130 08:47:11.863973 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a9a4f69-8d1a-4d88-b2cf-97c8c661899a-utilities" (OuterVolumeSpecName: "utilities") pod "6a9a4f69-8d1a-4d88-b2cf-97c8c661899a" (UID: "6a9a4f69-8d1a-4d88-b2cf-97c8c661899a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:47:11 crc kubenswrapper[4870]: I0130 08:47:11.873137 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a9a4f69-8d1a-4d88-b2cf-97c8c661899a-kube-api-access-wtf45" (OuterVolumeSpecName: "kube-api-access-wtf45") pod "6a9a4f69-8d1a-4d88-b2cf-97c8c661899a" (UID: "6a9a4f69-8d1a-4d88-b2cf-97c8c661899a"). InnerVolumeSpecName "kube-api-access-wtf45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:47:11 crc kubenswrapper[4870]: I0130 08:47:11.906802 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a9a4f69-8d1a-4d88-b2cf-97c8c661899a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a9a4f69-8d1a-4d88-b2cf-97c8c661899a" (UID: "6a9a4f69-8d1a-4d88-b2cf-97c8c661899a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:47:11 crc kubenswrapper[4870]: I0130 08:47:11.965459 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a9a4f69-8d1a-4d88-b2cf-97c8c661899a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:47:11 crc kubenswrapper[4870]: I0130 08:47:11.965504 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a9a4f69-8d1a-4d88-b2cf-97c8c661899a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:47:11 crc kubenswrapper[4870]: I0130 08:47:11.965517 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtf45\" (UniqueName: \"kubernetes.io/projected/6a9a4f69-8d1a-4d88-b2cf-97c8c661899a-kube-api-access-wtf45\") on node \"crc\" DevicePath \"\"" Jan 30 08:47:12 crc kubenswrapper[4870]: I0130 08:47:12.640245 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trmvn" event={"ID":"6a9a4f69-8d1a-4d88-b2cf-97c8c661899a","Type":"ContainerDied","Data":"d437a8dec9fb30f00a4e34ade8f87279693722da031abd27b5c05ab979ca074a"} Jan 30 08:47:12 crc kubenswrapper[4870]: I0130 08:47:12.640589 4870 scope.go:117] "RemoveContainer" containerID="d36182e2b07c4b05bbda48d653498be4113ed91928168f19497faa1185fccd8f" Jan 30 08:47:12 crc kubenswrapper[4870]: I0130 08:47:12.640379 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-trmvn" Jan 30 08:47:12 crc kubenswrapper[4870]: I0130 08:47:12.666452 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-trmvn"] Jan 30 08:47:12 crc kubenswrapper[4870]: I0130 08:47:12.667309 4870 scope.go:117] "RemoveContainer" containerID="d0054c95b532e36424c3acf46688e0ebfdf292697edc56afd3f30e979d09653c" Jan 30 08:47:12 crc kubenswrapper[4870]: I0130 08:47:12.676192 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-trmvn"] Jan 30 08:47:12 crc kubenswrapper[4870]: I0130 08:47:12.692554 4870 scope.go:117] "RemoveContainer" containerID="fb63265b6df908944c072b8b056481ba1574a9802aca45041ac50a3b50cbe3ff" Jan 30 08:47:14 crc kubenswrapper[4870]: I0130 08:47:14.096147 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a9a4f69-8d1a-4d88-b2cf-97c8c661899a" path="/var/lib/kubelet/pods/6a9a4f69-8d1a-4d88-b2cf-97c8c661899a/volumes" Jan 30 08:47:20 crc kubenswrapper[4870]: I0130 08:47:20.074725 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:47:20 crc kubenswrapper[4870]: E0130 08:47:20.075752 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:47:32 crc kubenswrapper[4870]: I0130 08:47:32.083022 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:47:32 crc kubenswrapper[4870]: E0130 08:47:32.086531 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:47:43 crc kubenswrapper[4870]: I0130 08:47:43.074610 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:47:43 crc kubenswrapper[4870]: E0130 08:47:43.075700 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:47:58 crc kubenswrapper[4870]: I0130 08:47:58.074638 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:47:58 crc kubenswrapper[4870]: E0130 08:47:58.075419 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:48:09 crc kubenswrapper[4870]: I0130 08:48:09.074602 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:48:09 crc kubenswrapper[4870]: E0130 08:48:09.075468 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:48:21 crc kubenswrapper[4870]: I0130 08:48:21.075297 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:48:21 crc kubenswrapper[4870]: E0130 08:48:21.076088 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:48:32 crc kubenswrapper[4870]: I0130 08:48:32.082866 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:48:32 crc kubenswrapper[4870]: E0130 08:48:32.084303 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:48:44 crc kubenswrapper[4870]: I0130 08:48:44.076020 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:48:44 crc kubenswrapper[4870]: E0130 08:48:44.077603 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:48:57 crc kubenswrapper[4870]: I0130 08:48:57.076171 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:48:57 crc kubenswrapper[4870]: E0130 08:48:57.077112 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:49:09 crc kubenswrapper[4870]: I0130 08:49:09.074568 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:49:09 crc kubenswrapper[4870]: E0130 08:49:09.076853 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:49:21 crc kubenswrapper[4870]: I0130 08:49:21.074768 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:49:21 crc kubenswrapper[4870]: E0130 08:49:21.075936 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:49:35 crc kubenswrapper[4870]: I0130 08:49:35.075378 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:49:35 crc kubenswrapper[4870]: E0130 08:49:35.080375 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" 
podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:49:49 crc kubenswrapper[4870]: I0130 08:49:49.074396 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:49:49 crc kubenswrapper[4870]: E0130 08:49:49.075287 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:50:03 crc kubenswrapper[4870]: I0130 08:50:03.075092 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:50:03 crc kubenswrapper[4870]: E0130 08:50:03.075852 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:50:18 crc kubenswrapper[4870]: I0130 08:50:18.075171 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:50:18 crc kubenswrapper[4870]: E0130 08:50:18.076045 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:50:22 crc kubenswrapper[4870]: I0130 08:50:22.451354 4870 generic.go:334] "Generic (PLEG): container finished" podID="9e214e41-a575-467c-a053-d6807c4f1512" containerID="849cf3c1bde9a8f54182673096d806dd29610e89f52d8b6fde1f213230d3c284" exitCode=0 Jan 30 08:50:22 crc kubenswrapper[4870]: I0130 08:50:22.451899 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" event={"ID":"9e214e41-a575-467c-a053-d6807c4f1512","Type":"ContainerDied","Data":"849cf3c1bde9a8f54182673096d806dd29610e89f52d8b6fde1f213230d3c284"} Jan 30 08:50:23 crc kubenswrapper[4870]: I0130 08:50:23.892540 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.014869 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqzbr\" (UniqueName: \"kubernetes.io/projected/9e214e41-a575-467c-a053-d6807c4f1512-kube-api-access-sqzbr\") pod \"9e214e41-a575-467c-a053-d6807c4f1512\" (UID: \"9e214e41-a575-467c-a053-d6807c4f1512\") " Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.014989 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-inventory\") pod \"9e214e41-a575-467c-a053-d6807c4f1512\" (UID: \"9e214e41-a575-467c-a053-d6807c4f1512\") " Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.015068 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-libvirt-secret-0\") pod \"9e214e41-a575-467c-a053-d6807c4f1512\" (UID: \"9e214e41-a575-467c-a053-d6807c4f1512\") " Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.015228 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-ssh-key-openstack-edpm-ipam\") pod \"9e214e41-a575-467c-a053-d6807c4f1512\" (UID: \"9e214e41-a575-467c-a053-d6807c4f1512\") " Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.015324 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-libvirt-combined-ca-bundle\") pod \"9e214e41-a575-467c-a053-d6807c4f1512\" (UID: \"9e214e41-a575-467c-a053-d6807c4f1512\") " Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.020385 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "9e214e41-a575-467c-a053-d6807c4f1512" (UID: "9e214e41-a575-467c-a053-d6807c4f1512"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.020440 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e214e41-a575-467c-a053-d6807c4f1512-kube-api-access-sqzbr" (OuterVolumeSpecName: "kube-api-access-sqzbr") pod "9e214e41-a575-467c-a053-d6807c4f1512" (UID: "9e214e41-a575-467c-a053-d6807c4f1512"). InnerVolumeSpecName "kube-api-access-sqzbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.043191 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "9e214e41-a575-467c-a053-d6807c4f1512" (UID: "9e214e41-a575-467c-a053-d6807c4f1512"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.044866 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-inventory" (OuterVolumeSpecName: "inventory") pod "9e214e41-a575-467c-a053-d6807c4f1512" (UID: "9e214e41-a575-467c-a053-d6807c4f1512"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.055118 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9e214e41-a575-467c-a053-d6807c4f1512" (UID: "9e214e41-a575-467c-a053-d6807c4f1512"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.117661 4870 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.117996 4870 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.118006 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqzbr\" (UniqueName: \"kubernetes.io/projected/9e214e41-a575-467c-a053-d6807c4f1512-kube-api-access-sqzbr\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.118018 4870 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.118028 4870 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9e214e41-a575-467c-a053-d6807c4f1512-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.468246 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" event={"ID":"9e214e41-a575-467c-a053-d6807c4f1512","Type":"ContainerDied","Data":"958101a8528cf0e341326792f2f6ae6259384318c3c17b73fec5224f1c834d1c"} Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.468283 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="958101a8528cf0e341326792f2f6ae6259384318c3c17b73fec5224f1c834d1c" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.468304 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-26lfr" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.573496 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb"] Jan 30 08:50:24 crc kubenswrapper[4870]: E0130 08:50:24.573949 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a9a4f69-8d1a-4d88-b2cf-97c8c661899a" containerName="registry-server" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.573965 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a9a4f69-8d1a-4d88-b2cf-97c8c661899a" containerName="registry-server" Jan 30 08:50:24 crc kubenswrapper[4870]: E0130 08:50:24.573986 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a9a4f69-8d1a-4d88-b2cf-97c8c661899a" containerName="extract-utilities" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.573993 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a9a4f69-8d1a-4d88-b2cf-97c8c661899a" containerName="extract-utilities" Jan 30 08:50:24 crc kubenswrapper[4870]: E0130 08:50:24.574009 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e214e41-a575-467c-a053-d6807c4f1512" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.574016 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e214e41-a575-467c-a053-d6807c4f1512" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 08:50:24 crc kubenswrapper[4870]: E0130 08:50:24.574033 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a9a4f69-8d1a-4d88-b2cf-97c8c661899a" containerName="extract-content" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.574039 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a9a4f69-8d1a-4d88-b2cf-97c8c661899a" containerName="extract-content" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.574207 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e214e41-a575-467c-a053-d6807c4f1512" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.574226 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a9a4f69-8d1a-4d88-b2cf-97c8c661899a" containerName="registry-server" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.574899 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.576820 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n6f5w" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.576949 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.577351 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.577555 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.577928 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.583249 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb"] Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.585309 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.585953 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.729036 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.729095 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.729267 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.729377 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.729491 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26f7t\" (UniqueName: 
\"kubernetes.io/projected/da926ccc-5787-4741-a00c-1163494adb5e-kube-api-access-26f7t\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.729535 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.729674 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.729705 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.729732 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/da926ccc-5787-4741-a00c-1163494adb5e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.831676 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.831746 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.831823 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.831868 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.831936 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26f7t\" (UniqueName: \"kubernetes.io/projected/da926ccc-5787-4741-a00c-1163494adb5e-kube-api-access-26f7t\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.831969 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.832023 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.832055 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.832085 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/da926ccc-5787-4741-a00c-1163494adb5e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.833166 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/da926ccc-5787-4741-a00c-1163494adb5e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.838044 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.840592 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.841383 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.847937 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.847984 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.847951 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.848573 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.864731 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26f7t\" (UniqueName: \"kubernetes.io/projected/da926ccc-5787-4741-a00c-1163494adb5e-kube-api-access-26f7t\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4b7pb\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:24 crc kubenswrapper[4870]: I0130 08:50:24.896534 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:50:25 crc kubenswrapper[4870]: I0130 08:50:25.450607 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb"] Jan 30 08:50:25 crc kubenswrapper[4870]: I0130 08:50:25.477606 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" event={"ID":"da926ccc-5787-4741-a00c-1163494adb5e","Type":"ContainerStarted","Data":"bba6c00493ab2f4ae10ae8d8416c4437552d46fb802ef359a38e7c883b259426"} Jan 30 08:50:26 crc kubenswrapper[4870]: I0130 08:50:26.485841 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" event={"ID":"da926ccc-5787-4741-a00c-1163494adb5e","Type":"ContainerStarted","Data":"ce3daff1bc1672ba26e707a19c7baa5445c66c281c7820bc811779b2c7d174cf"} Jan 30 08:50:26 crc kubenswrapper[4870]: I0130 08:50:26.506993 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" podStartSLOduration=1.9629164399999999 podStartE2EDuration="2.506975423s" podCreationTimestamp="2026-01-30 08:50:24 +0000 UTC" firstStartedPulling="2026-01-30 08:50:25.45000204 +0000 UTC m=+2464.145549159" lastFinishedPulling="2026-01-30 08:50:25.994061033 +0000 UTC m=+2464.689608142" observedRunningTime="2026-01-30 08:50:26.499430268 +0000 UTC m=+2465.194977407" watchObservedRunningTime="2026-01-30 08:50:26.506975423 +0000 UTC m=+2465.202522532" Jan 30 08:50:33 crc kubenswrapper[4870]: I0130 08:50:33.075014 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:50:33 crc kubenswrapper[4870]: E0130 08:50:33.075919 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:50:46 crc kubenswrapper[4870]: I0130 08:50:46.074955 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:50:46 crc kubenswrapper[4870]: E0130 08:50:46.075626 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:50:49 crc kubenswrapper[4870]: I0130 08:50:49.313795 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-trcmx"] Jan 30 08:50:49 crc kubenswrapper[4870]: I0130 08:50:49.316939 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trcmx" Jan 30 08:50:49 crc kubenswrapper[4870]: I0130 08:50:49.324756 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-trcmx"] Jan 30 08:50:49 crc kubenswrapper[4870]: I0130 08:50:49.511055 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec-utilities\") pod \"redhat-marketplace-trcmx\" (UID: \"87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec\") " pod="openshift-marketplace/redhat-marketplace-trcmx" Jan 30 08:50:49 crc kubenswrapper[4870]: I0130 08:50:49.511753 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc59p\" (UniqueName: \"kubernetes.io/projected/87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec-kube-api-access-vc59p\") pod \"redhat-marketplace-trcmx\" (UID: \"87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec\") " pod="openshift-marketplace/redhat-marketplace-trcmx" Jan 30 08:50:49 crc kubenswrapper[4870]: I0130 08:50:49.511830 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec-catalog-content\") pod \"redhat-marketplace-trcmx\" (UID: \"87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec\") " pod="openshift-marketplace/redhat-marketplace-trcmx" Jan 30 08:50:49 crc kubenswrapper[4870]: I0130 08:50:49.613535 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec-catalog-content\") pod \"redhat-marketplace-trcmx\" (UID: \"87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec\") " pod="openshift-marketplace/redhat-marketplace-trcmx" Jan 30 08:50:49 crc kubenswrapper[4870]: I0130 08:50:49.613687 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec-utilities\") pod \"redhat-marketplace-trcmx\" (UID: \"87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec\") " pod="openshift-marketplace/redhat-marketplace-trcmx" Jan 30 08:50:49 crc kubenswrapper[4870]: I0130 08:50:49.613819 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc59p\" (UniqueName: \"kubernetes.io/projected/87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec-kube-api-access-vc59p\") pod \"redhat-marketplace-trcmx\" (UID: \"87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec\") " pod="openshift-marketplace/redhat-marketplace-trcmx" Jan 30 08:50:49 crc kubenswrapper[4870]: I0130 08:50:49.614109 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec-catalog-content\") pod \"redhat-marketplace-trcmx\" (UID: \"87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec\") " pod="openshift-marketplace/redhat-marketplace-trcmx" Jan 30 08:50:49 crc kubenswrapper[4870]: I0130 08:50:49.614187 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec-utilities\") pod \"redhat-marketplace-trcmx\" (UID: \"87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec\") " pod="openshift-marketplace/redhat-marketplace-trcmx" Jan 30 08:50:49 crc kubenswrapper[4870]: I0130 08:50:49.645164 4870 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vc59p\" (UniqueName: \"kubernetes.io/projected/87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec-kube-api-access-vc59p\") pod \"redhat-marketplace-trcmx\" (UID: \"87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec\") " pod="openshift-marketplace/redhat-marketplace-trcmx" Jan 30 08:50:49 crc kubenswrapper[4870]: I0130 08:50:49.945063 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trcmx" Jan 30 08:50:50 crc kubenswrapper[4870]: I0130 08:50:50.421832 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-trcmx"] Jan 30 08:50:50 crc kubenswrapper[4870]: I0130 08:50:50.730295 4870 generic.go:334] "Generic (PLEG): container finished" podID="87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec" containerID="8a0125be7ea3ce2393a8d7b81dc51bb9070b89ed613cec2a0dd8fdb27d330270" exitCode=0 Jan 30 08:50:50 crc kubenswrapper[4870]: I0130 08:50:50.730344 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trcmx" event={"ID":"87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec","Type":"ContainerDied","Data":"8a0125be7ea3ce2393a8d7b81dc51bb9070b89ed613cec2a0dd8fdb27d330270"} Jan 30 08:50:50 crc kubenswrapper[4870]: I0130 08:50:50.730619 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trcmx" event={"ID":"87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec","Type":"ContainerStarted","Data":"8a75d01ae5384c4b2a2bea2de803d33bc1fef67f0fed796468b959dc601d5043"} Jan 30 08:50:52 crc kubenswrapper[4870]: I0130 08:50:52.750379 4870 generic.go:334] "Generic (PLEG): container finished" podID="87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec" containerID="9c90272b72bb2b233b86a8b1d5e7a016e253e833dc283bd605c0711e8cd1d6c8" exitCode=0 Jan 30 08:50:52 crc kubenswrapper[4870]: I0130 08:50:52.750480 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trcmx" event={"ID":"87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec","Type":"ContainerDied","Data":"9c90272b72bb2b233b86a8b1d5e7a016e253e833dc283bd605c0711e8cd1d6c8"} Jan 30 08:50:54 crc kubenswrapper[4870]: I0130 08:50:54.839336 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trcmx" event={"ID":"87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec","Type":"ContainerStarted","Data":"91879a3d12cdb103682c88b869a7a8eafe18c4b3b76fbf459fb48f2a8c17fcde"} Jan 30 08:50:54 crc kubenswrapper[4870]: I0130 08:50:54.878597 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-trcmx" podStartSLOduration=3.110731107 podStartE2EDuration="5.87857718s" podCreationTimestamp="2026-01-30 08:50:49 +0000 UTC" firstStartedPulling="2026-01-30 08:50:50.731943182 +0000 UTC m=+2489.427490291" lastFinishedPulling="2026-01-30 08:50:53.499789235 +0000 UTC m=+2492.195336364" observedRunningTime="2026-01-30 08:50:54.875228016 +0000 UTC m=+2493.570775135" watchObservedRunningTime="2026-01-30 08:50:54.87857718 +0000 UTC m=+2493.574124289" Jan 30 08:50:59 crc kubenswrapper[4870]: I0130 08:50:59.945988 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-trcmx" Jan 30 08:50:59 crc kubenswrapper[4870]: I0130 08:50:59.946574 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-trcmx" Jan 30 08:50:59 crc kubenswrapper[4870]: I0130 08:50:59.998505 4870 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-trcmx" Jan 30 08:51:00 crc kubenswrapper[4870]: I0130 08:51:00.937635 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-trcmx" Jan 30 08:51:00 crc kubenswrapper[4870]: I0130 08:51:00.982894 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-trcmx"] Jan 30 08:51:01 crc kubenswrapper[4870]: I0130 08:51:01.075557 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:51:01 crc kubenswrapper[4870]: E0130 08:51:01.075904 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:51:02 crc kubenswrapper[4870]: I0130 08:51:02.908486 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-trcmx" podUID="87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec" containerName="registry-server" containerID="cri-o://91879a3d12cdb103682c88b869a7a8eafe18c4b3b76fbf459fb48f2a8c17fcde" gracePeriod=2 Jan 30 08:51:03 crc kubenswrapper[4870]: I0130 08:51:03.373887 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trcmx" Jan 30 08:51:03 crc kubenswrapper[4870]: I0130 08:51:03.544430 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc59p\" (UniqueName: \"kubernetes.io/projected/87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec-kube-api-access-vc59p\") pod \"87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec\" (UID: \"87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec\") " Jan 30 08:51:03 crc kubenswrapper[4870]: I0130 08:51:03.544732 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec-utilities\") pod \"87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec\" (UID: \"87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec\") " Jan 30 08:51:03 crc kubenswrapper[4870]: I0130 08:51:03.544992 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec-catalog-content\") pod \"87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec\" (UID: \"87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec\") " Jan 30 08:51:03 crc kubenswrapper[4870]: I0130 08:51:03.545923 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec-utilities" (OuterVolumeSpecName: "utilities") pod "87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec" (UID: "87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:51:03 crc kubenswrapper[4870]: I0130 08:51:03.550779 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:03 crc kubenswrapper[4870]: I0130 08:51:03.561589 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec-kube-api-access-vc59p" (OuterVolumeSpecName: "kube-api-access-vc59p") pod "87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec" (UID: "87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec"). InnerVolumeSpecName "kube-api-access-vc59p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:03 crc kubenswrapper[4870]: I0130 08:51:03.585517 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec" (UID: "87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:51:03 crc kubenswrapper[4870]: I0130 08:51:03.654071 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc59p\" (UniqueName: \"kubernetes.io/projected/87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec-kube-api-access-vc59p\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:03 crc kubenswrapper[4870]: I0130 08:51:03.654156 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:03 crc kubenswrapper[4870]: I0130 08:51:03.920363 4870 generic.go:334] "Generic (PLEG): container finished" podID="87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec" containerID="91879a3d12cdb103682c88b869a7a8eafe18c4b3b76fbf459fb48f2a8c17fcde" exitCode=0 Jan 30 08:51:03 crc kubenswrapper[4870]: I0130 08:51:03.920416 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trcmx" event={"ID":"87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec","Type":"ContainerDied","Data":"91879a3d12cdb103682c88b869a7a8eafe18c4b3b76fbf459fb48f2a8c17fcde"} Jan 30 08:51:03 crc kubenswrapper[4870]: I0130 08:51:03.920453 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trcmx" event={"ID":"87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec","Type":"ContainerDied","Data":"8a75d01ae5384c4b2a2bea2de803d33bc1fef67f0fed796468b959dc601d5043"} Jan 30 08:51:03 crc kubenswrapper[4870]: I0130 08:51:03.920473 4870 scope.go:117] "RemoveContainer" containerID="91879a3d12cdb103682c88b869a7a8eafe18c4b3b76fbf459fb48f2a8c17fcde" Jan 30 08:51:03 crc kubenswrapper[4870]: I0130 08:51:03.920428 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trcmx" Jan 30 08:51:03 crc kubenswrapper[4870]: I0130 08:51:03.947444 4870 scope.go:117] "RemoveContainer" containerID="9c90272b72bb2b233b86a8b1d5e7a016e253e833dc283bd605c0711e8cd1d6c8" Jan 30 08:51:03 crc kubenswrapper[4870]: I0130 08:51:03.957372 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-trcmx"] Jan 30 08:51:03 crc kubenswrapper[4870]: I0130 08:51:03.965729 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-trcmx"] Jan 30 08:51:03 crc kubenswrapper[4870]: I0130 08:51:03.982499 4870 scope.go:117] "RemoveContainer" containerID="8a0125be7ea3ce2393a8d7b81dc51bb9070b89ed613cec2a0dd8fdb27d330270" Jan 30 08:51:04 crc kubenswrapper[4870]: I0130 08:51:04.022451 4870 scope.go:117] "RemoveContainer" containerID="91879a3d12cdb103682c88b869a7a8eafe18c4b3b76fbf459fb48f2a8c17fcde" Jan 30 08:51:04 crc kubenswrapper[4870]: E0130 08:51:04.022985 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91879a3d12cdb103682c88b869a7a8eafe18c4b3b76fbf459fb48f2a8c17fcde\": container with ID starting with 91879a3d12cdb103682c88b869a7a8eafe18c4b3b76fbf459fb48f2a8c17fcde not found: ID does not exist" containerID="91879a3d12cdb103682c88b869a7a8eafe18c4b3b76fbf459fb48f2a8c17fcde" Jan 30 08:51:04 crc kubenswrapper[4870]: I0130 08:51:04.023028 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91879a3d12cdb103682c88b869a7a8eafe18c4b3b76fbf459fb48f2a8c17fcde"} err="failed to get container status \"91879a3d12cdb103682c88b869a7a8eafe18c4b3b76fbf459fb48f2a8c17fcde\": rpc error: code = NotFound desc = could not find container \"91879a3d12cdb103682c88b869a7a8eafe18c4b3b76fbf459fb48f2a8c17fcde\": container with ID starting with 91879a3d12cdb103682c88b869a7a8eafe18c4b3b76fbf459fb48f2a8c17fcde not found: ID does not exist" Jan 30 08:51:04 crc kubenswrapper[4870]: I0130 08:51:04.023083 4870 scope.go:117] "RemoveContainer" containerID="9c90272b72bb2b233b86a8b1d5e7a016e253e833dc283bd605c0711e8cd1d6c8" Jan 30 08:51:04 crc kubenswrapper[4870]: E0130 08:51:04.023543 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c90272b72bb2b233b86a8b1d5e7a016e253e833dc283bd605c0711e8cd1d6c8\": container with ID starting with 9c90272b72bb2b233b86a8b1d5e7a016e253e833dc283bd605c0711e8cd1d6c8 not found: ID does not exist" containerID="9c90272b72bb2b233b86a8b1d5e7a016e253e833dc283bd605c0711e8cd1d6c8" Jan 30 08:51:04 crc kubenswrapper[4870]: I0130 08:51:04.023594 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c90272b72bb2b233b86a8b1d5e7a016e253e833dc283bd605c0711e8cd1d6c8"} err="failed to get container status \"9c90272b72bb2b233b86a8b1d5e7a016e253e833dc283bd605c0711e8cd1d6c8\": rpc error: code = NotFound desc = could not find container \"9c90272b72bb2b233b86a8b1d5e7a016e253e833dc283bd605c0711e8cd1d6c8\": container with ID starting with 9c90272b72bb2b233b86a8b1d5e7a016e253e833dc283bd605c0711e8cd1d6c8 not found: ID does not exist" Jan 30 08:51:04 crc kubenswrapper[4870]: I0130 08:51:04.023612 4870 scope.go:117] "RemoveContainer" containerID="8a0125be7ea3ce2393a8d7b81dc51bb9070b89ed613cec2a0dd8fdb27d330270" Jan 30 08:51:04 crc kubenswrapper[4870]: E0130 08:51:04.024148 4870 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8a0125be7ea3ce2393a8d7b81dc51bb9070b89ed613cec2a0dd8fdb27d330270\": container with ID starting with 8a0125be7ea3ce2393a8d7b81dc51bb9070b89ed613cec2a0dd8fdb27d330270 not found: ID does not exist" containerID="8a0125be7ea3ce2393a8d7b81dc51bb9070b89ed613cec2a0dd8fdb27d330270" Jan 30 08:51:04 crc kubenswrapper[4870]: I0130 08:51:04.024183 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a0125be7ea3ce2393a8d7b81dc51bb9070b89ed613cec2a0dd8fdb27d330270"} err="failed to get container status \"8a0125be7ea3ce2393a8d7b81dc51bb9070b89ed613cec2a0dd8fdb27d330270\": rpc error: code = NotFound desc = could not find container \"8a0125be7ea3ce2393a8d7b81dc51bb9070b89ed613cec2a0dd8fdb27d330270\": container with ID starting with 8a0125be7ea3ce2393a8d7b81dc51bb9070b89ed613cec2a0dd8fdb27d330270 not found: ID does not exist" Jan 30 08:51:04 crc kubenswrapper[4870]: I0130 08:51:04.087063 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec" path="/var/lib/kubelet/pods/87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec/volumes" Jan 30 08:51:14 crc kubenswrapper[4870]: I0130 08:51:14.075186 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:51:14 crc kubenswrapper[4870]: E0130 08:51:14.076310 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:51:25 crc kubenswrapper[4870]: I0130 08:51:25.076242 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:51:25 crc kubenswrapper[4870]: E0130 08:51:25.076988 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:51:40 crc kubenswrapper[4870]: I0130 08:51:40.074464 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:51:40 crc kubenswrapper[4870]: E0130 08:51:40.075470 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:51:51 crc kubenswrapper[4870]: I0130 08:51:51.075364 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:51:51 crc kubenswrapper[4870]: E0130 08:51:51.076085 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:52:03 crc kubenswrapper[4870]: I0130 08:52:03.075997 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:52:03 crc kubenswrapper[4870]: I0130 08:52:03.431965 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerStarted","Data":"9db7902fb455898f2a67824f5d2bac1880accc6a6ce6fbe42d5af520837903b4"} Jan 30 08:52:44 crc kubenswrapper[4870]: I0130 08:52:44.860322 4870 generic.go:334] "Generic (PLEG): container finished" podID="da926ccc-5787-4741-a00c-1163494adb5e" containerID="ce3daff1bc1672ba26e707a19c7baa5445c66c281c7820bc811779b2c7d174cf" exitCode=0 Jan 30 08:52:44 crc kubenswrapper[4870]: I0130 08:52:44.860374 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" event={"ID":"da926ccc-5787-4741-a00c-1163494adb5e","Type":"ContainerDied","Data":"ce3daff1bc1672ba26e707a19c7baa5445c66c281c7820bc811779b2c7d174cf"} Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.400680 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.450171 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/da926ccc-5787-4741-a00c-1163494adb5e-nova-extra-config-0\") pod \"da926ccc-5787-4741-a00c-1163494adb5e\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.450476 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-cell1-compute-config-0\") pod \"da926ccc-5787-4741-a00c-1163494adb5e\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.450653 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-migration-ssh-key-0\") pod \"da926ccc-5787-4741-a00c-1163494adb5e\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.450965 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-combined-ca-bundle\") pod \"da926ccc-5787-4741-a00c-1163494adb5e\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.451520 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-migration-ssh-key-1\") pod \"da926ccc-5787-4741-a00c-1163494adb5e\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 
08:52:46.451640 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-inventory\") pod \"da926ccc-5787-4741-a00c-1163494adb5e\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.451771 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-ssh-key-openstack-edpm-ipam\") pod \"da926ccc-5787-4741-a00c-1163494adb5e\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.451925 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-cell1-compute-config-1\") pod \"da926ccc-5787-4741-a00c-1163494adb5e\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.452040 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26f7t\" (UniqueName: \"kubernetes.io/projected/da926ccc-5787-4741-a00c-1163494adb5e-kube-api-access-26f7t\") pod \"da926ccc-5787-4741-a00c-1163494adb5e\" (UID: \"da926ccc-5787-4741-a00c-1163494adb5e\") " Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.457213 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da926ccc-5787-4741-a00c-1163494adb5e-kube-api-access-26f7t" (OuterVolumeSpecName: "kube-api-access-26f7t") pod "da926ccc-5787-4741-a00c-1163494adb5e" (UID: "da926ccc-5787-4741-a00c-1163494adb5e"). InnerVolumeSpecName "kube-api-access-26f7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.457322 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "da926ccc-5787-4741-a00c-1163494adb5e" (UID: "da926ccc-5787-4741-a00c-1163494adb5e"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.484335 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-inventory" (OuterVolumeSpecName: "inventory") pod "da926ccc-5787-4741-a00c-1163494adb5e" (UID: "da926ccc-5787-4741-a00c-1163494adb5e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.486289 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "da926ccc-5787-4741-a00c-1163494adb5e" (UID: "da926ccc-5787-4741-a00c-1163494adb5e"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.486512 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da926ccc-5787-4741-a00c-1163494adb5e-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "da926ccc-5787-4741-a00c-1163494adb5e" (UID: "da926ccc-5787-4741-a00c-1163494adb5e"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.486776 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "da926ccc-5787-4741-a00c-1163494adb5e" (UID: "da926ccc-5787-4741-a00c-1163494adb5e"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.493205 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "da926ccc-5787-4741-a00c-1163494adb5e" (UID: "da926ccc-5787-4741-a00c-1163494adb5e"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.497606 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "da926ccc-5787-4741-a00c-1163494adb5e" (UID: "da926ccc-5787-4741-a00c-1163494adb5e"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.498545 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "da926ccc-5787-4741-a00c-1163494adb5e" (UID: "da926ccc-5787-4741-a00c-1163494adb5e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.554929 4870 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.554981 4870 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.554991 4870 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.555000 4870 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.555009 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26f7t\" (UniqueName: \"kubernetes.io/projected/da926ccc-5787-4741-a00c-1163494adb5e-kube-api-access-26f7t\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.555021 4870 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/da926ccc-5787-4741-a00c-1163494adb5e-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.555030 4870 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.555038 4870 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.555051 4870 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da926ccc-5787-4741-a00c-1163494adb5e-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.883031 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" event={"ID":"da926ccc-5787-4741-a00c-1163494adb5e","Type":"ContainerDied","Data":"bba6c00493ab2f4ae10ae8d8416c4437552d46fb802ef359a38e7c883b259426"} Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.883101 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bba6c00493ab2f4ae10ae8d8416c4437552d46fb802ef359a38e7c883b259426" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.883051 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4b7pb" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.974173 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz"] Jan 30 08:52:46 crc kubenswrapper[4870]: E0130 08:52:46.974554 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec" containerName="extract-utilities" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.974570 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec" containerName="extract-utilities" Jan 30 08:52:46 crc kubenswrapper[4870]: E0130 08:52:46.974588 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da926ccc-5787-4741-a00c-1163494adb5e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.974595 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="da926ccc-5787-4741-a00c-1163494adb5e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 08:52:46 crc kubenswrapper[4870]: E0130 08:52:46.974621 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec" containerName="extract-content" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.974627 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec" containerName="extract-content" Jan 30 08:52:46 crc kubenswrapper[4870]: E0130 08:52:46.974637 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec" containerName="registry-server" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.974643 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec" containerName="registry-server" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.974812 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f5cab1-c7ab-4b3f-9ae6-f050d22c06ec" containerName="registry-server" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.974832 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="da926ccc-5787-4741-a00c-1163494adb5e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.975439 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.985696 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz"] Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.986015 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.986175 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.986335 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 08:52:46 crc kubenswrapper[4870]: I0130 08:52:46.986754 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n6f5w" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.002483 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.064716 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.064829 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdzp2\" (UniqueName: \"kubernetes.io/projected/1e93cbad-07e7-4073-a577-b666a6901a1d-kube-api-access-gdzp2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.064907 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.065015 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.065156 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.065206 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.065284 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.167408 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.167500 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.167563 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.167638 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.167765 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdzp2\" (UniqueName: \"kubernetes.io/projected/1e93cbad-07e7-4073-a577-b666a6901a1d-kube-api-access-gdzp2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.167899 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.167949 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.174148 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.174422 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.174957 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.176118 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.181277 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.182015 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.193227 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdzp2\" (UniqueName: 
\"kubernetes.io/projected/1e93cbad-07e7-4073-a577-b666a6901a1d-kube-api-access-gdzp2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.312109 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.824158 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.826608 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz"] Jan 30 08:52:47 crc kubenswrapper[4870]: I0130 08:52:47.894662 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" event={"ID":"1e93cbad-07e7-4073-a577-b666a6901a1d","Type":"ContainerStarted","Data":"20ba77c8c428673fee3b11764831b8a484f339cb904a99f86474d502491153f0"} Jan 30 08:52:49 crc kubenswrapper[4870]: I0130 08:52:49.913513 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" event={"ID":"1e93cbad-07e7-4073-a577-b666a6901a1d","Type":"ContainerStarted","Data":"985b10315ccae1ef801aaf273960bc89bc06483a8a1faada7eb0a77adb779982"} Jan 30 08:52:49 crc kubenswrapper[4870]: I0130 08:52:49.933856 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" podStartSLOduration=2.95598236 podStartE2EDuration="3.933827783s" podCreationTimestamp="2026-01-30 08:52:46 +0000 UTC" firstStartedPulling="2026-01-30 08:52:47.823686273 +0000 UTC m=+2606.519233382" lastFinishedPulling="2026-01-30 08:52:48.801531686 +0000 UTC m=+2607.497078805" observedRunningTime="2026-01-30 08:52:49.930276102 +0000 UTC m=+2608.625823211" watchObservedRunningTime="2026-01-30 08:52:49.933827783 +0000 UTC m=+2608.629374892" Jan 30 08:54:25 crc kubenswrapper[4870]: I0130 08:54:25.249493 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:54:25 crc kubenswrapper[4870]: I0130 08:54:25.250023 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:54:51 crc kubenswrapper[4870]: I0130 08:54:51.075378 4870 generic.go:334] "Generic (PLEG): container finished" podID="1e93cbad-07e7-4073-a577-b666a6901a1d" containerID="985b10315ccae1ef801aaf273960bc89bc06483a8a1faada7eb0a77adb779982" exitCode=0 Jan 30 08:54:51 crc kubenswrapper[4870]: I0130 08:54:51.075459 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" event={"ID":"1e93cbad-07e7-4073-a577-b666a6901a1d","Type":"ContainerDied","Data":"985b10315ccae1ef801aaf273960bc89bc06483a8a1faada7eb0a77adb779982"} Jan 30 08:54:52 crc kubenswrapper[4870]: I0130 08:54:52.608896 4870 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:54:52 crc kubenswrapper[4870]: I0130 08:54:52.755832 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdzp2\" (UniqueName: \"kubernetes.io/projected/1e93cbad-07e7-4073-a577-b666a6901a1d-kube-api-access-gdzp2\") pod \"1e93cbad-07e7-4073-a577-b666a6901a1d\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " Jan 30 08:54:52 crc kubenswrapper[4870]: I0130 08:54:52.756110 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-telemetry-combined-ca-bundle\") pod \"1e93cbad-07e7-4073-a577-b666a6901a1d\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " Jan 30 08:54:52 crc kubenswrapper[4870]: I0130 08:54:52.756157 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ssh-key-openstack-edpm-ipam\") pod \"1e93cbad-07e7-4073-a577-b666a6901a1d\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " Jan 30 08:54:52 crc kubenswrapper[4870]: I0130 08:54:52.756237 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ceilometer-compute-config-data-2\") pod \"1e93cbad-07e7-4073-a577-b666a6901a1d\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " Jan 30 08:54:52 crc kubenswrapper[4870]: I0130 08:54:52.756265 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-inventory\") pod \"1e93cbad-07e7-4073-a577-b666a6901a1d\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " Jan 30 08:54:52 crc kubenswrapper[4870]: I0130 08:54:52.756287 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ceilometer-compute-config-data-1\") pod \"1e93cbad-07e7-4073-a577-b666a6901a1d\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " Jan 30 08:54:52 crc kubenswrapper[4870]: I0130 08:54:52.756359 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ceilometer-compute-config-data-0\") pod \"1e93cbad-07e7-4073-a577-b666a6901a1d\" (UID: \"1e93cbad-07e7-4073-a577-b666a6901a1d\") " Jan 30 08:54:52 crc kubenswrapper[4870]: I0130 08:54:52.765198 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "1e93cbad-07e7-4073-a577-b666a6901a1d" (UID: "1e93cbad-07e7-4073-a577-b666a6901a1d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:54:52 crc kubenswrapper[4870]: I0130 08:54:52.765418 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e93cbad-07e7-4073-a577-b666a6901a1d-kube-api-access-gdzp2" (OuterVolumeSpecName: "kube-api-access-gdzp2") pod "1e93cbad-07e7-4073-a577-b666a6901a1d" (UID: "1e93cbad-07e7-4073-a577-b666a6901a1d"). InnerVolumeSpecName "kube-api-access-gdzp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:54:52 crc kubenswrapper[4870]: I0130 08:54:52.797318 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-inventory" (OuterVolumeSpecName: "inventory") pod "1e93cbad-07e7-4073-a577-b666a6901a1d" (UID: "1e93cbad-07e7-4073-a577-b666a6901a1d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:54:52 crc kubenswrapper[4870]: I0130 08:54:52.798008 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "1e93cbad-07e7-4073-a577-b666a6901a1d" (UID: "1e93cbad-07e7-4073-a577-b666a6901a1d"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:54:52 crc kubenswrapper[4870]: I0130 08:54:52.798811 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "1e93cbad-07e7-4073-a577-b666a6901a1d" (UID: "1e93cbad-07e7-4073-a577-b666a6901a1d"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:54:52 crc kubenswrapper[4870]: I0130 08:54:52.799107 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "1e93cbad-07e7-4073-a577-b666a6901a1d" (UID: "1e93cbad-07e7-4073-a577-b666a6901a1d"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:54:52 crc kubenswrapper[4870]: I0130 08:54:52.801111 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1e93cbad-07e7-4073-a577-b666a6901a1d" (UID: "1e93cbad-07e7-4073-a577-b666a6901a1d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:54:52 crc kubenswrapper[4870]: I0130 08:54:52.859000 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdzp2\" (UniqueName: \"kubernetes.io/projected/1e93cbad-07e7-4073-a577-b666a6901a1d-kube-api-access-gdzp2\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:52 crc kubenswrapper[4870]: I0130 08:54:52.859050 4870 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:52 crc kubenswrapper[4870]: I0130 08:54:52.859066 4870 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:52 crc kubenswrapper[4870]: I0130 08:54:52.859078 4870 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:52 crc kubenswrapper[4870]: I0130 08:54:52.859094 4870 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:52 crc kubenswrapper[4870]: I0130 08:54:52.859107 4870 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:52 crc kubenswrapper[4870]: I0130 08:54:52.859119 4870 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e93cbad-07e7-4073-a577-b666a6901a1d-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:53 crc kubenswrapper[4870]: I0130 08:54:53.127354 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" event={"ID":"1e93cbad-07e7-4073-a577-b666a6901a1d","Type":"ContainerDied","Data":"20ba77c8c428673fee3b11764831b8a484f339cb904a99f86474d502491153f0"} Jan 30 08:54:53 crc kubenswrapper[4870]: I0130 08:54:53.127421 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20ba77c8c428673fee3b11764831b8a484f339cb904a99f86474d502491153f0" Jan 30 08:54:53 crc kubenswrapper[4870]: I0130 08:54:53.127445 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz" Jan 30 08:54:55 crc kubenswrapper[4870]: I0130 08:54:55.249795 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:54:55 crc kubenswrapper[4870]: I0130 08:54:55.250226 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:55:14 crc kubenswrapper[4870]: I0130 08:55:14.806270 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ttj46"] Jan 30 08:55:14 crc kubenswrapper[4870]: E0130 08:55:14.808201 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e93cbad-07e7-4073-a577-b666a6901a1d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 08:55:14 crc kubenswrapper[4870]: I0130 08:55:14.808219 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e93cbad-07e7-4073-a577-b666a6901a1d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 08:55:14 crc kubenswrapper[4870]: I0130 08:55:14.808657 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e93cbad-07e7-4073-a577-b666a6901a1d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 08:55:14 crc kubenswrapper[4870]: I0130 08:55:14.810153 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ttj46" Jan 30 08:55:14 crc kubenswrapper[4870]: I0130 08:55:14.832855 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ttj46"] Jan 30 08:55:14 crc kubenswrapper[4870]: I0130 08:55:14.905218 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8-utilities\") pod \"redhat-operators-ttj46\" (UID: \"b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8\") " pod="openshift-marketplace/redhat-operators-ttj46" Jan 30 08:55:14 crc kubenswrapper[4870]: I0130 08:55:14.905811 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8-catalog-content\") pod \"redhat-operators-ttj46\" (UID: \"b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8\") " pod="openshift-marketplace/redhat-operators-ttj46" Jan 30 08:55:14 crc kubenswrapper[4870]: I0130 08:55:14.905934 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqdrw\" (UniqueName: \"kubernetes.io/projected/b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8-kube-api-access-zqdrw\") pod \"redhat-operators-ttj46\" (UID: \"b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8\") " pod="openshift-marketplace/redhat-operators-ttj46" Jan 30 08:55:15 crc kubenswrapper[4870]: I0130 08:55:15.007674 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8-utilities\") pod \"redhat-operators-ttj46\" (UID: \"b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8\") " pod="openshift-marketplace/redhat-operators-ttj46" Jan 30 08:55:15 crc kubenswrapper[4870]: I0130 08:55:15.007782 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8-catalog-content\") pod \"redhat-operators-ttj46\" (UID: \"b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8\") " pod="openshift-marketplace/redhat-operators-ttj46" Jan 30 08:55:15 crc kubenswrapper[4870]: I0130 08:55:15.007909 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqdrw\" (UniqueName: \"kubernetes.io/projected/b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8-kube-api-access-zqdrw\") pod \"redhat-operators-ttj46\" (UID: \"b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8\") " pod="openshift-marketplace/redhat-operators-ttj46" Jan 30 08:55:15 crc kubenswrapper[4870]: I0130 08:55:15.008966 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8-utilities\") pod \"redhat-operators-ttj46\" (UID: \"b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8\") " pod="openshift-marketplace/redhat-operators-ttj46" Jan 30 08:55:15 crc kubenswrapper[4870]: I0130 08:55:15.009260 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8-catalog-content\") pod \"redhat-operators-ttj46\" (UID: \"b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8\") " pod="openshift-marketplace/redhat-operators-ttj46" Jan 30 08:55:15 crc kubenswrapper[4870]: I0130 08:55:15.032799 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zqdrw\" (UniqueName: \"kubernetes.io/projected/b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8-kube-api-access-zqdrw\") pod \"redhat-operators-ttj46\" (UID: \"b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8\") " pod="openshift-marketplace/redhat-operators-ttj46" Jan 30 08:55:15 crc kubenswrapper[4870]: I0130 08:55:15.184292 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ttj46" Jan 30 08:55:15 crc kubenswrapper[4870]: I0130 08:55:15.697836 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ttj46"] Jan 30 08:55:16 crc kubenswrapper[4870]: I0130 08:55:16.362231 4870 generic.go:334] "Generic (PLEG): container finished" podID="b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8" containerID="4b0288aa2f1e5ed2aa5e341896bd0a25bf0ee307d0d5a1b59eeeb261aa26f223" exitCode=0 Jan 30 08:55:16 crc kubenswrapper[4870]: I0130 08:55:16.362366 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ttj46" event={"ID":"b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8","Type":"ContainerDied","Data":"4b0288aa2f1e5ed2aa5e341896bd0a25bf0ee307d0d5a1b59eeeb261aa26f223"} Jan 30 08:55:16 crc kubenswrapper[4870]: I0130 08:55:16.362596 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ttj46" event={"ID":"b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8","Type":"ContainerStarted","Data":"edcb4436070891faee3a6385c93795c073a934cb14d6e9e2233f2ac428560b42"} Jan 30 08:55:18 crc kubenswrapper[4870]: I0130 08:55:18.426160 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ttj46" event={"ID":"b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8","Type":"ContainerStarted","Data":"c750e14ec0d483905810cdf2c55155d8e55a2cd9015dc13d53c80e893c497f55"} Jan 30 08:55:20 crc kubenswrapper[4870]: I0130 08:55:20.443924 4870 generic.go:334] "Generic (PLEG): container finished" podID="b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8" containerID="c750e14ec0d483905810cdf2c55155d8e55a2cd9015dc13d53c80e893c497f55" exitCode=0 Jan 30 08:55:20 crc kubenswrapper[4870]: I0130 08:55:20.444189 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ttj46" event={"ID":"b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8","Type":"ContainerDied","Data":"c750e14ec0d483905810cdf2c55155d8e55a2cd9015dc13d53c80e893c497f55"} Jan 30 08:55:22 crc kubenswrapper[4870]: I0130 08:55:22.463989 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ttj46" event={"ID":"b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8","Type":"ContainerStarted","Data":"244c17a2d3dadb1346e7260f17f9015369b980c0ff0706d749ce8a104161debb"} Jan 30 08:55:22 crc kubenswrapper[4870]: I0130 08:55:22.495806 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ttj46" podStartSLOduration=3.214222975 podStartE2EDuration="8.495788539s" podCreationTimestamp="2026-01-30 08:55:14 +0000 UTC" firstStartedPulling="2026-01-30 08:55:16.364409975 +0000 UTC m=+2755.059957084" lastFinishedPulling="2026-01-30 08:55:21.645975519 +0000 UTC m=+2760.341522648" observedRunningTime="2026-01-30 08:55:22.488536712 +0000 UTC m=+2761.184083821" watchObservedRunningTime="2026-01-30 08:55:22.495788539 +0000 UTC m=+2761.191335648" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.185624 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ttj46" Jan 30 
08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.186169 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ttj46" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.249997 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.250054 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.250104 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.250901 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9db7902fb455898f2a67824f5d2bac1880accc6a6ce6fbe42d5af520837903b4"} pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.250966 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" containerID="cri-o://9db7902fb455898f2a67824f5d2bac1880accc6a6ce6fbe42d5af520837903b4" gracePeriod=600 Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.491764 4870 generic.go:334] "Generic (PLEG): container finished" podID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerID="9db7902fb455898f2a67824f5d2bac1880accc6a6ce6fbe42d5af520837903b4" exitCode=0 Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.491817 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerDied","Data":"9db7902fb455898f2a67824f5d2bac1880accc6a6ce6fbe42d5af520837903b4"} Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.491867 4870 scope.go:117] "RemoveContainer" containerID="c5e496759a03317cce950bd3f63bcb5ceea02cbee0d1be311b1868af5eaedaac" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.853074 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.857919 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.861188 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.869907 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.917118 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.917185 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-config-data\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.917624 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-etc-nvme\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.917686 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-run\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.917730 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.917809 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-config-data-custom\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.917859 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-scripts\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.917929 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt2g6\" (UniqueName: \"kubernetes.io/projected/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-kube-api-access-mt2g6\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.918001 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-dev\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.918052 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-sys\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.918070 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.918087 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.918196 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.918286 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.918425 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-lib-modules\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.966476 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.968427 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.970624 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Jan 30 08:55:25 crc kubenswrapper[4870]: I0130 08:55:25.993078 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.002980 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.004750 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.006621 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.021989 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-lib-modules\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.022061 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.022095 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-config-data\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.022124 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-etc-nvme\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.022147 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-run\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.022171 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.022214 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-config-data-custom\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.022246 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-scripts\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.022276 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt2g6\" (UniqueName: \"kubernetes.io/projected/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-kube-api-access-mt2g6\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.022310 4870 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-dev\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.022334 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-sys\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.022353 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.022382 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.022428 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.022490 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.022610 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.022653 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-lib-modules\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.022915 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.024559 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-etc-nvme\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.024615 4870 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-run\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.024818 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.024894 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-dev\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.024923 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-sys\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.024948 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.025654 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.031103 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.037537 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-config-data-custom\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.041994 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.044939 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-scripts\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.057873 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt2g6\" (UniqueName: \"kubernetes.io/projected/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-kube-api-access-mt2g6\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " 
pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.058659 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cecf4070-2dd9-496d-bf4d-7f456eb6ed72-config-data\") pod \"cinder-backup-0\" (UID: \"cecf4070-2dd9-496d-bf4d-7f456eb6ed72\") " pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125320 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t92n5\" (UniqueName: \"kubernetes.io/projected/56215e10-017e-4662-92ab-8f25178c0fab-kube-api-access-t92n5\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125377 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125407 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-dev\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125424 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125444 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-run\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125468 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125487 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56215e10-017e-4662-92ab-8f25178c0fab-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125508 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125547 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06465a52-3f34-45fd-b95e-e679adcb59e6-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125566 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125590 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06465a52-3f34-45fd-b95e-e679adcb59e6-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125605 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56215e10-017e-4662-92ab-8f25178c0fab-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125620 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125635 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06465a52-3f34-45fd-b95e-e679adcb59e6-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125680 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125715 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125745 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56215e10-017e-4662-92ab-8f25178c0fab-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125770 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dev\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125792 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56215e10-017e-4662-92ab-8f25178c0fab-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125812 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125835 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125860 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125937 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.125976 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.126007 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06465a52-3f34-45fd-b95e-e679adcb59e6-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.126043 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-sys\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.126253 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.126332 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.126409 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.126547 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrjbs\" (UniqueName: \"kubernetes.io/projected/06465a52-3f34-45fd-b95e-e679adcb59e6-kube-api-access-lrjbs\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.217980 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.228357 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrjbs\" (UniqueName: \"kubernetes.io/projected/06465a52-3f34-45fd-b95e-e679adcb59e6-kube-api-access-lrjbs\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.228430 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t92n5\" (UniqueName: \"kubernetes.io/projected/56215e10-017e-4662-92ab-8f25178c0fab-kube-api-access-t92n5\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.228450 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.228558 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.228690 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-dev\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.228468 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-dev\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.228825 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.228850 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-run\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.228889 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.228909 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56215e10-017e-4662-92ab-8f25178c0fab-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.228931 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.228967 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06465a52-3f34-45fd-b95e-e679adcb59e6-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.228986 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.229009 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06465a52-3f34-45fd-b95e-e679adcb59e6-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.229024 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56215e10-017e-4662-92ab-8f25178c0fab-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.229039 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.229055 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06465a52-3f34-45fd-b95e-e679adcb59e6-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.229093 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.229117 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.229139 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56215e10-017e-4662-92ab-8f25178c0fab-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.229154 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.229186 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56215e10-017e-4662-92ab-8f25178c0fab-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.229202 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.229220 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.229239 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: 
\"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.229266 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.229289 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.229312 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06465a52-3f34-45fd-b95e-e679adcb59e6-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.229333 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-sys\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.229361 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.229376 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.229404 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.229436 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.230249 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-run\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.230306 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-var-lib-cinder\") 
pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.230360 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.230392 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.230530 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.230529 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.230612 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.230648 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-sys\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.230684 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.230710 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.230743 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.230791 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.231364 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.231432 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.231466 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.231504 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/06465a52-3f34-45fd-b95e-e679adcb59e6-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.231911 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/56215e10-017e-4662-92ab-8f25178c0fab-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.234163 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ttj46" podUID="b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8" containerName="registry-server" probeResult="failure" output=< Jan 30 08:55:26 crc kubenswrapper[4870]: timeout: failed to connect service ":50051" within 1s Jan 30 08:55:26 crc kubenswrapper[4870]: > Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.234598 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06465a52-3f34-45fd-b95e-e679adcb59e6-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.235022 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06465a52-3f34-45fd-b95e-e679adcb59e6-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.236000 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06465a52-3f34-45fd-b95e-e679adcb59e6-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.236862 4870 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56215e10-017e-4662-92ab-8f25178c0fab-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.239432 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56215e10-017e-4662-92ab-8f25178c0fab-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.239839 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56215e10-017e-4662-92ab-8f25178c0fab-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.242628 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56215e10-017e-4662-92ab-8f25178c0fab-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.248703 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06465a52-3f34-45fd-b95e-e679adcb59e6-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.250828 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrjbs\" (UniqueName: \"kubernetes.io/projected/06465a52-3f34-45fd-b95e-e679adcb59e6-kube-api-access-lrjbs\") pod \"cinder-volume-nfs-0\" (UID: \"06465a52-3f34-45fd-b95e-e679adcb59e6\") " pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.251039 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t92n5\" (UniqueName: \"kubernetes.io/projected/56215e10-017e-4662-92ab-8f25178c0fab-kube-api-access-t92n5\") pod \"cinder-volume-nfs-2-0\" (UID: \"56215e10-017e-4662-92ab-8f25178c0fab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.285808 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.439195 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.553317 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerStarted","Data":"166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283"} Jan 30 08:55:26 crc kubenswrapper[4870]: I0130 08:55:26.913167 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 30 08:55:27 crc kubenswrapper[4870]: I0130 08:55:27.132744 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 30 08:55:27 crc kubenswrapper[4870]: I0130 08:55:27.253172 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 30 08:55:27 crc kubenswrapper[4870]: W0130 08:55:27.445938 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06465a52_3f34_45fd_b95e_e679adcb59e6.slice/crio-d7dfbf5cd33ed1cccec4aaeb8099c65062912abd65c6c1a41e5269770529fbef WatchSource:0}: Error finding container d7dfbf5cd33ed1cccec4aaeb8099c65062912abd65c6c1a41e5269770529fbef: Status 404 returned error can't find the container with id d7dfbf5cd33ed1cccec4aaeb8099c65062912abd65c6c1a41e5269770529fbef Jan 30 08:55:27 crc kubenswrapper[4870]: I0130 08:55:27.587017 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"56215e10-017e-4662-92ab-8f25178c0fab","Type":"ContainerStarted","Data":"d6ff31af0b9c693f7878c9396efb9be4b00263c5fd248dfa95cc3b26021e7e19"} Jan 30 08:55:27 crc kubenswrapper[4870]: I0130 08:55:27.618577 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"cecf4070-2dd9-496d-bf4d-7f456eb6ed72","Type":"ContainerStarted","Data":"23e8b8de0d67143b1cf5603519e372944b671a9479a54617f6f64c87ac458e6d"} Jan 30 08:55:27 crc kubenswrapper[4870]: I0130 08:55:27.636976 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"06465a52-3f34-45fd-b95e-e679adcb59e6","Type":"ContainerStarted","Data":"d7dfbf5cd33ed1cccec4aaeb8099c65062912abd65c6c1a41e5269770529fbef"} Jan 30 08:55:28 crc kubenswrapper[4870]: I0130 08:55:28.648163 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"06465a52-3f34-45fd-b95e-e679adcb59e6","Type":"ContainerStarted","Data":"858df5bdce00185c5e4fe9ee5ded6e1b854d84ac8baf1197016c54a12d797c4a"} Jan 30 08:55:28 crc kubenswrapper[4870]: I0130 08:55:28.648742 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"06465a52-3f34-45fd-b95e-e679adcb59e6","Type":"ContainerStarted","Data":"16c32da4b856d2522ef7f18800a180372b7266486fc968f6e1635bbd59831b3f"} Jan 30 08:55:28 crc kubenswrapper[4870]: I0130 08:55:28.657652 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"56215e10-017e-4662-92ab-8f25178c0fab","Type":"ContainerStarted","Data":"adaa71de8c46c4ed98102bf2d93148478c3797029dd38cab3a7e607f27abee56"} Jan 30 08:55:28 crc kubenswrapper[4870]: I0130 08:55:28.657698 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"56215e10-017e-4662-92ab-8f25178c0fab","Type":"ContainerStarted","Data":"1a426c582aed3f1f227d92cdaccf150a841885fc6541669ad6d0a2e1ea570008"} Jan 30 
08:55:28 crc kubenswrapper[4870]: I0130 08:55:28.661976 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"cecf4070-2dd9-496d-bf4d-7f456eb6ed72","Type":"ContainerStarted","Data":"21d9690d3844574cbfcb154865410a91fa68f0a2b1537f5109cd02161959d703"} Jan 30 08:55:28 crc kubenswrapper[4870]: I0130 08:55:28.662564 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"cecf4070-2dd9-496d-bf4d-7f456eb6ed72","Type":"ContainerStarted","Data":"78c12f1ff4d4aa25d08587cb6d42b529c6ac38ece09f08fd2bf4b617cb236fe6"} Jan 30 08:55:28 crc kubenswrapper[4870]: I0130 08:55:28.674295 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=3.520329712 podStartE2EDuration="3.674279214s" podCreationTimestamp="2026-01-30 08:55:25 +0000 UTC" firstStartedPulling="2026-01-30 08:55:27.456649319 +0000 UTC m=+2766.152196418" lastFinishedPulling="2026-01-30 08:55:27.610598821 +0000 UTC m=+2766.306145920" observedRunningTime="2026-01-30 08:55:28.672510829 +0000 UTC m=+2767.368057928" watchObservedRunningTime="2026-01-30 08:55:28.674279214 +0000 UTC m=+2767.369826323" Jan 30 08:55:28 crc kubenswrapper[4870]: I0130 08:55:28.711898 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=3.344351473 podStartE2EDuration="3.711850918s" podCreationTimestamp="2026-01-30 08:55:25 +0000 UTC" firstStartedPulling="2026-01-30 08:55:27.15525255 +0000 UTC m=+2765.850799659" lastFinishedPulling="2026-01-30 08:55:27.522751985 +0000 UTC m=+2766.218299104" observedRunningTime="2026-01-30 08:55:28.703244739 +0000 UTC m=+2767.398791858" watchObservedRunningTime="2026-01-30 08:55:28.711850918 +0000 UTC m=+2767.407398027" Jan 30 08:55:28 crc kubenswrapper[4870]: I0130 08:55:28.731921 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.496258669 podStartE2EDuration="3.731903834s" podCreationTimestamp="2026-01-30 08:55:25 +0000 UTC" firstStartedPulling="2026-01-30 08:55:26.919682858 +0000 UTC m=+2765.615229967" lastFinishedPulling="2026-01-30 08:55:27.155328013 +0000 UTC m=+2765.850875132" observedRunningTime="2026-01-30 08:55:28.728167028 +0000 UTC m=+2767.423714137" watchObservedRunningTime="2026-01-30 08:55:28.731903834 +0000 UTC m=+2767.427450943" Jan 30 08:55:31 crc kubenswrapper[4870]: I0130 08:55:31.218156 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 30 08:55:31 crc kubenswrapper[4870]: I0130 08:55:31.287091 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0" Jan 30 08:55:31 crc kubenswrapper[4870]: I0130 08:55:31.439597 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0" Jan 30 08:55:36 crc kubenswrapper[4870]: I0130 08:55:36.262170 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ttj46" podUID="b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8" containerName="registry-server" probeResult="failure" output=< Jan 30 08:55:36 crc kubenswrapper[4870]: timeout: failed to connect service ":50051" within 1s Jan 30 08:55:36 crc kubenswrapper[4870]: > Jan 30 08:55:36 crc kubenswrapper[4870]: I0130 08:55:36.464137 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" 
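
[Editorial code sketch] The "Probe failed" entries above show the registry-server container's startup probe timing out against ":50051" with a 1-second budget, reported repeatedly until the probe later flips to status="started". Below is a minimal Go sketch of what such a check amounts to, assuming the container exposes the standard gRPC health service on :50051 as the probe output suggests; this is illustrative only, not the kubelet's or the actual probe binary's code.

// probe_sketch.go
//
// Minimal sketch of a gRPC startup probe against :50051 with a 1s budget,
// matching the "timeout: failed to connect service \":50051\" within 1s"
// output in the log. Hypothetical illustration, not the real probe binary.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// The 1s deadline mirrors the "within 1s" budget in the probe output.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Dial the service address taken from the log's probe output (":50051").
	conn, err := grpc.DialContext(ctx, ":50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock())
	if err != nil {
		fmt.Fprintf(os.Stderr, "timeout: failed to connect service %q within 1s\n", ":50051")
		os.Exit(1) // non-zero exit is what kubelet records as probeResult="failure"
	}
	defer conn.Close()

	// Query the standard gRPC health service for the overall serving status.
	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
		os.Exit(1)
	}
}

When the check exits non-zero, kubenswrapper logs probeResult="failure" with the command's stderr as the multi-line output seen above; once the server answers within the deadline, the same check exits 0 and the kubelet marks the startup probe "started", as in the following records.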
Jan 30 08:55:37 crc kubenswrapper[4870]: I0130 08:55:37.040285 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0"
Jan 30 08:55:37 crc kubenswrapper[4870]: I0130 08:55:37.130048 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0"
Jan 30 08:55:46 crc kubenswrapper[4870]: I0130 08:55:46.236222 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ttj46" podUID="b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8" containerName="registry-server" probeResult="failure" output=<
Jan 30 08:55:46 crc kubenswrapper[4870]: timeout: failed to connect service ":50051" within 1s
Jan 30 08:55:46 crc kubenswrapper[4870]: >
Jan 30 08:55:56 crc kubenswrapper[4870]: I0130 08:55:56.232387 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ttj46" podUID="b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8" containerName="registry-server" probeResult="failure" output=<
Jan 30 08:55:56 crc kubenswrapper[4870]: timeout: failed to connect service ":50051" within 1s
Jan 30 08:55:56 crc kubenswrapper[4870]: >
Jan 30 08:56:05 crc kubenswrapper[4870]: I0130 08:56:05.236801 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ttj46"
Jan 30 08:56:05 crc kubenswrapper[4870]: I0130 08:56:05.288070 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ttj46"
Jan 30 08:56:05 crc kubenswrapper[4870]: I0130 08:56:05.475140 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ttj46"]
Jan 30 08:56:07 crc kubenswrapper[4870]: I0130 08:56:07.059225 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ttj46" podUID="b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8" containerName="registry-server" containerID="cri-o://244c17a2d3dadb1346e7260f17f9015369b980c0ff0706d749ce8a104161debb" gracePeriod=2
Jan 30 08:56:07 crc kubenswrapper[4870]: I0130 08:56:07.600682 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ttj46"
Jan 30 08:56:07 crc kubenswrapper[4870]: I0130 08:56:07.786977 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8-utilities\") pod \"b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8\" (UID: \"b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8\") "
Jan 30 08:56:07 crc kubenswrapper[4870]: I0130 08:56:07.787448 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8-catalog-content\") pod \"b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8\" (UID: \"b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8\") "
Jan 30 08:56:07 crc kubenswrapper[4870]: I0130 08:56:07.787570 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqdrw\" (UniqueName: \"kubernetes.io/projected/b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8-kube-api-access-zqdrw\") pod \"b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8\" (UID: \"b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8\") "
Jan 30 08:56:07 crc kubenswrapper[4870]: I0130 08:56:07.789203 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8-utilities" (OuterVolumeSpecName: "utilities") pod "b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8" (UID: "b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:56:07 crc kubenswrapper[4870]: I0130 08:56:07.794925 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8-kube-api-access-zqdrw" (OuterVolumeSpecName: "kube-api-access-zqdrw") pod "b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8" (UID: "b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8"). InnerVolumeSpecName "kube-api-access-zqdrw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:56:07 crc kubenswrapper[4870]: I0130 08:56:07.890756 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqdrw\" (UniqueName: \"kubernetes.io/projected/b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8-kube-api-access-zqdrw\") on node \"crc\" DevicePath \"\""
Jan 30 08:56:07 crc kubenswrapper[4870]: I0130 08:56:07.890824 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 08:56:07 crc kubenswrapper[4870]: I0130 08:56:07.903399 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8" (UID: "b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:56:07 crc kubenswrapper[4870]: I0130 08:56:07.991822 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 08:56:08 crc kubenswrapper[4870]: I0130 08:56:08.070980 4870 generic.go:334] "Generic (PLEG): container finished" podID="b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8" containerID="244c17a2d3dadb1346e7260f17f9015369b980c0ff0706d749ce8a104161debb" exitCode=0
Jan 30 08:56:08 crc kubenswrapper[4870]: I0130 08:56:08.071030 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ttj46" event={"ID":"b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8","Type":"ContainerDied","Data":"244c17a2d3dadb1346e7260f17f9015369b980c0ff0706d749ce8a104161debb"}
Jan 30 08:56:08 crc kubenswrapper[4870]: I0130 08:56:08.071063 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ttj46" event={"ID":"b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8","Type":"ContainerDied","Data":"edcb4436070891faee3a6385c93795c073a934cb14d6e9e2233f2ac428560b42"}
Jan 30 08:56:08 crc kubenswrapper[4870]: I0130 08:56:08.071082 4870 scope.go:117] "RemoveContainer" containerID="244c17a2d3dadb1346e7260f17f9015369b980c0ff0706d749ce8a104161debb"
Jan 30 08:56:08 crc kubenswrapper[4870]: I0130 08:56:08.071961 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ttj46"
Jan 30 08:56:08 crc kubenswrapper[4870]: I0130 08:56:08.124782 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ttj46"]
Jan 30 08:56:08 crc kubenswrapper[4870]: I0130 08:56:08.131775 4870 scope.go:117] "RemoveContainer" containerID="c750e14ec0d483905810cdf2c55155d8e55a2cd9015dc13d53c80e893c497f55"
Jan 30 08:56:08 crc kubenswrapper[4870]: I0130 08:56:08.134644 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ttj46"]
Jan 30 08:56:08 crc kubenswrapper[4870]: I0130 08:56:08.154094 4870 scope.go:117] "RemoveContainer" containerID="4b0288aa2f1e5ed2aa5e341896bd0a25bf0ee307d0d5a1b59eeeb261aa26f223"
Jan 30 08:56:08 crc kubenswrapper[4870]: I0130 08:56:08.194285 4870 scope.go:117] "RemoveContainer" containerID="244c17a2d3dadb1346e7260f17f9015369b980c0ff0706d749ce8a104161debb"
Jan 30 08:56:08 crc kubenswrapper[4870]: E0130 08:56:08.194695 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"244c17a2d3dadb1346e7260f17f9015369b980c0ff0706d749ce8a104161debb\": container with ID starting with 244c17a2d3dadb1346e7260f17f9015369b980c0ff0706d749ce8a104161debb not found: ID does not exist" containerID="244c17a2d3dadb1346e7260f17f9015369b980c0ff0706d749ce8a104161debb"
Jan 30 08:56:08 crc kubenswrapper[4870]: I0130 08:56:08.194732 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"244c17a2d3dadb1346e7260f17f9015369b980c0ff0706d749ce8a104161debb"} err="failed to get container status \"244c17a2d3dadb1346e7260f17f9015369b980c0ff0706d749ce8a104161debb\": rpc error: code = NotFound desc = could not find container \"244c17a2d3dadb1346e7260f17f9015369b980c0ff0706d749ce8a104161debb\": container with ID starting with 244c17a2d3dadb1346e7260f17f9015369b980c0ff0706d749ce8a104161debb not found: ID does not exist"
Jan 30 08:56:08 crc kubenswrapper[4870]: I0130 08:56:08.194755 4870 scope.go:117] "RemoveContainer" containerID="c750e14ec0d483905810cdf2c55155d8e55a2cd9015dc13d53c80e893c497f55"
Jan 30 08:56:08 crc kubenswrapper[4870]: E0130 08:56:08.195216 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c750e14ec0d483905810cdf2c55155d8e55a2cd9015dc13d53c80e893c497f55\": container with ID starting with c750e14ec0d483905810cdf2c55155d8e55a2cd9015dc13d53c80e893c497f55 not found: ID does not exist" containerID="c750e14ec0d483905810cdf2c55155d8e55a2cd9015dc13d53c80e893c497f55"
Jan 30 08:56:08 crc kubenswrapper[4870]: I0130 08:56:08.195246 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c750e14ec0d483905810cdf2c55155d8e55a2cd9015dc13d53c80e893c497f55"} err="failed to get container status \"c750e14ec0d483905810cdf2c55155d8e55a2cd9015dc13d53c80e893c497f55\": rpc error: code = NotFound desc = could not find container \"c750e14ec0d483905810cdf2c55155d8e55a2cd9015dc13d53c80e893c497f55\": container with ID starting with c750e14ec0d483905810cdf2c55155d8e55a2cd9015dc13d53c80e893c497f55 not found: ID does not exist"
Jan 30 08:56:08 crc kubenswrapper[4870]: I0130 08:56:08.195262 4870 scope.go:117] "RemoveContainer" containerID="4b0288aa2f1e5ed2aa5e341896bd0a25bf0ee307d0d5a1b59eeeb261aa26f223"
Jan 30 08:56:08 crc kubenswrapper[4870]: E0130 08:56:08.196198 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b0288aa2f1e5ed2aa5e341896bd0a25bf0ee307d0d5a1b59eeeb261aa26f223\": container with ID starting with 4b0288aa2f1e5ed2aa5e341896bd0a25bf0ee307d0d5a1b59eeeb261aa26f223 not found: ID does not exist" containerID="4b0288aa2f1e5ed2aa5e341896bd0a25bf0ee307d0d5a1b59eeeb261aa26f223"
Jan 30 08:56:08 crc kubenswrapper[4870]: I0130 08:56:08.196340 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b0288aa2f1e5ed2aa5e341896bd0a25bf0ee307d0d5a1b59eeeb261aa26f223"} err="failed to get container status \"4b0288aa2f1e5ed2aa5e341896bd0a25bf0ee307d0d5a1b59eeeb261aa26f223\": rpc error: code = NotFound desc = could not find container \"4b0288aa2f1e5ed2aa5e341896bd0a25bf0ee307d0d5a1b59eeeb261aa26f223\": container with ID starting with 4b0288aa2f1e5ed2aa5e341896bd0a25bf0ee307d0d5a1b59eeeb261aa26f223 not found: ID does not exist"
Jan 30 08:56:09 crc kubenswrapper[4870]: I0130 08:56:09.890259 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6jj6s"]
Jan 30 08:56:09 crc kubenswrapper[4870]: E0130 08:56:09.891319 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8" containerName="extract-content"
Jan 30 08:56:09 crc kubenswrapper[4870]: I0130 08:56:09.891344 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8" containerName="extract-content"
Jan 30 08:56:09 crc kubenswrapper[4870]: E0130 08:56:09.891387 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8" containerName="registry-server"
Jan 30 08:56:09 crc kubenswrapper[4870]: I0130 08:56:09.891400 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8" containerName="registry-server"
Jan 30 08:56:09 crc kubenswrapper[4870]: E0130 08:56:09.891431 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8" containerName="extract-utilities"
Jan 30 08:56:09 crc kubenswrapper[4870]: I0130 08:56:09.891443 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8" containerName="extract-utilities"
Jan 30 08:56:09 crc kubenswrapper[4870]: I0130 08:56:09.891780 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8" containerName="registry-server"
Jan 30 08:56:09 crc kubenswrapper[4870]: I0130 08:56:09.894979 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6jj6s"
Jan 30 08:56:09 crc kubenswrapper[4870]: I0130 08:56:09.903556 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6jj6s"]
Jan 30 08:56:09 crc kubenswrapper[4870]: I0130 08:56:09.932568 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52b5a042-8bb0-474f-bc28-7d116341bf06-catalog-content\") pod \"community-operators-6jj6s\" (UID: \"52b5a042-8bb0-474f-bc28-7d116341bf06\") " pod="openshift-marketplace/community-operators-6jj6s"
Jan 30 08:56:09 crc kubenswrapper[4870]: I0130 08:56:09.932711 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52b5a042-8bb0-474f-bc28-7d116341bf06-utilities\") pod \"community-operators-6jj6s\" (UID: \"52b5a042-8bb0-474f-bc28-7d116341bf06\") " pod="openshift-marketplace/community-operators-6jj6s"
Jan 30 08:56:09 crc kubenswrapper[4870]: I0130 08:56:09.932908 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn7sw\" (UniqueName: \"kubernetes.io/projected/52b5a042-8bb0-474f-bc28-7d116341bf06-kube-api-access-wn7sw\") pod \"community-operators-6jj6s\" (UID: \"52b5a042-8bb0-474f-bc28-7d116341bf06\") " pod="openshift-marketplace/community-operators-6jj6s"
Jan 30 08:56:10 crc kubenswrapper[4870]: I0130 08:56:10.039973 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn7sw\" (UniqueName: \"kubernetes.io/projected/52b5a042-8bb0-474f-bc28-7d116341bf06-kube-api-access-wn7sw\") pod \"community-operators-6jj6s\" (UID: \"52b5a042-8bb0-474f-bc28-7d116341bf06\") " pod="openshift-marketplace/community-operators-6jj6s"
Jan 30 08:56:10 crc kubenswrapper[4870]: I0130 08:56:10.040115 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52b5a042-8bb0-474f-bc28-7d116341bf06-catalog-content\") pod \"community-operators-6jj6s\" (UID: \"52b5a042-8bb0-474f-bc28-7d116341bf06\") " pod="openshift-marketplace/community-operators-6jj6s"
Jan 30 08:56:10 crc kubenswrapper[4870]: I0130 08:56:10.040238 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52b5a042-8bb0-474f-bc28-7d116341bf06-utilities\") pod \"community-operators-6jj6s\" (UID: \"52b5a042-8bb0-474f-bc28-7d116341bf06\") " pod="openshift-marketplace/community-operators-6jj6s"
Jan 30 08:56:10 crc kubenswrapper[4870]: I0130 08:56:10.040894 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52b5a042-8bb0-474f-bc28-7d116341bf06-utilities\") pod \"community-operators-6jj6s\" (UID: \"52b5a042-8bb0-474f-bc28-7d116341bf06\") " pod="openshift-marketplace/community-operators-6jj6s"
Jan 30 08:56:10 crc kubenswrapper[4870]: I0130 08:56:10.041510 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52b5a042-8bb0-474f-bc28-7d116341bf06-catalog-content\") pod \"community-operators-6jj6s\" (UID: \"52b5a042-8bb0-474f-bc28-7d116341bf06\") " pod="openshift-marketplace/community-operators-6jj6s"
Jan 30 08:56:10 crc kubenswrapper[4870]: I0130 08:56:10.061584 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn7sw\" (UniqueName: \"kubernetes.io/projected/52b5a042-8bb0-474f-bc28-7d116341bf06-kube-api-access-wn7sw\") pod \"community-operators-6jj6s\" (UID: \"52b5a042-8bb0-474f-bc28-7d116341bf06\") " pod="openshift-marketplace/community-operators-6jj6s"
Jan 30 08:56:10 crc kubenswrapper[4870]: I0130 08:56:10.110141 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8" path="/var/lib/kubelet/pods/b88d5fc4-6cdd-4231-97b2-6e9dda7a33b8/volumes"
Jan 30 08:56:10 crc kubenswrapper[4870]: I0130 08:56:10.220552 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6jj6s"
Jan 30 08:56:10 crc kubenswrapper[4870]: I0130 08:56:10.792290 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6jj6s"]
Jan 30 08:56:11 crc kubenswrapper[4870]: I0130 08:56:11.121502 4870 generic.go:334] "Generic (PLEG): container finished" podID="52b5a042-8bb0-474f-bc28-7d116341bf06" containerID="9534298d81270f4262590015fd5426c5e04ac73af7f2212e9942a942047e072a" exitCode=0
Jan 30 08:56:11 crc kubenswrapper[4870]: I0130 08:56:11.121551 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jj6s" event={"ID":"52b5a042-8bb0-474f-bc28-7d116341bf06","Type":"ContainerDied","Data":"9534298d81270f4262590015fd5426c5e04ac73af7f2212e9942a942047e072a"}
Jan 30 08:56:11 crc kubenswrapper[4870]: I0130 08:56:11.121581 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jj6s" event={"ID":"52b5a042-8bb0-474f-bc28-7d116341bf06","Type":"ContainerStarted","Data":"1397bf99db3c0cbef19f957fbbdb2abd4d8aa60d8ee4d5dd3023a74c9e29c5cd"}
Jan 30 08:56:12 crc kubenswrapper[4870]: I0130 08:56:12.131176 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jj6s" event={"ID":"52b5a042-8bb0-474f-bc28-7d116341bf06","Type":"ContainerStarted","Data":"f880e1cfa9a280bc4a25e948a919389702e514b95714f1dee0c5caa6cc6ce014"}
Jan 30 08:56:13 crc kubenswrapper[4870]: E0130 08:56:13.900930 4870 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52b5a042_8bb0_474f_bc28_7d116341bf06.slice/crio-f880e1cfa9a280bc4a25e948a919389702e514b95714f1dee0c5caa6cc6ce014.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52b5a042_8bb0_474f_bc28_7d116341bf06.slice/crio-conmon-f880e1cfa9a280bc4a25e948a919389702e514b95714f1dee0c5caa6cc6ce014.scope\": RecentStats: unable to find data in memory cache]"
Jan 30 08:56:14 crc kubenswrapper[4870]: I0130 08:56:14.148807 4870 generic.go:334] "Generic (PLEG): container finished" podID="52b5a042-8bb0-474f-bc28-7d116341bf06" containerID="f880e1cfa9a280bc4a25e948a919389702e514b95714f1dee0c5caa6cc6ce014" exitCode=0
Jan 30 08:56:14 crc kubenswrapper[4870]: I0130 08:56:14.148903 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jj6s" event={"ID":"52b5a042-8bb0-474f-bc28-7d116341bf06","Type":"ContainerDied","Data":"f880e1cfa9a280bc4a25e948a919389702e514b95714f1dee0c5caa6cc6ce014"}
Jan 30 08:56:15 crc kubenswrapper[4870]: I0130 08:56:15.160279 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jj6s" event={"ID":"52b5a042-8bb0-474f-bc28-7d116341bf06","Type":"ContainerStarted","Data":"3235f9fe80a91166263d30df27913fe34690c14c54edeacf1eb8d9e013e8060b"}
Jan 30 08:56:15 crc kubenswrapper[4870]: I0130 08:56:15.185408 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6jj6s" podStartSLOduration=2.746153812 podStartE2EDuration="6.185385988s" podCreationTimestamp="2026-01-30 08:56:09 +0000 UTC" firstStartedPulling="2026-01-30 08:56:11.123665338 +0000 UTC m=+2809.819212447" lastFinishedPulling="2026-01-30 08:56:14.562897514 +0000 UTC m=+2813.258444623" observedRunningTime="2026-01-30 08:56:15.177554284 +0000 UTC m=+2813.873101393" watchObservedRunningTime="2026-01-30 08:56:15.185385988 +0000 UTC m=+2813.880933117"
Jan 30 08:56:20 crc kubenswrapper[4870]: I0130 08:56:20.220682 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6jj6s"
Jan 30 08:56:20 crc kubenswrapper[4870]: I0130 08:56:20.221173 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6jj6s"
Jan 30 08:56:20 crc kubenswrapper[4870]: I0130 08:56:20.280574 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6jj6s"
Jan 30 08:56:21 crc kubenswrapper[4870]: I0130 08:56:21.291196 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6jj6s"
Jan 30 08:56:21 crc kubenswrapper[4870]: I0130 08:56:21.354836 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6jj6s"]
Jan 30 08:56:23 crc kubenswrapper[4870]: I0130 08:56:23.245075 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6jj6s" podUID="52b5a042-8bb0-474f-bc28-7d116341bf06" containerName="registry-server" containerID="cri-o://3235f9fe80a91166263d30df27913fe34690c14c54edeacf1eb8d9e013e8060b" gracePeriod=2
Jan 30 08:56:23 crc kubenswrapper[4870]: I0130 08:56:23.742892 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6jj6s"
Jan 30 08:56:23 crc kubenswrapper[4870]: I0130 08:56:23.845862 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn7sw\" (UniqueName: \"kubernetes.io/projected/52b5a042-8bb0-474f-bc28-7d116341bf06-kube-api-access-wn7sw\") pod \"52b5a042-8bb0-474f-bc28-7d116341bf06\" (UID: \"52b5a042-8bb0-474f-bc28-7d116341bf06\") "
Jan 30 08:56:23 crc kubenswrapper[4870]: I0130 08:56:23.846052 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52b5a042-8bb0-474f-bc28-7d116341bf06-catalog-content\") pod \"52b5a042-8bb0-474f-bc28-7d116341bf06\" (UID: \"52b5a042-8bb0-474f-bc28-7d116341bf06\") "
Jan 30 08:56:23 crc kubenswrapper[4870]: I0130 08:56:23.846318 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52b5a042-8bb0-474f-bc28-7d116341bf06-utilities\") pod \"52b5a042-8bb0-474f-bc28-7d116341bf06\" (UID: \"52b5a042-8bb0-474f-bc28-7d116341bf06\") "
Jan 30 08:56:23 crc kubenswrapper[4870]: I0130 08:56:23.847306 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52b5a042-8bb0-474f-bc28-7d116341bf06-utilities" (OuterVolumeSpecName: "utilities") pod "52b5a042-8bb0-474f-bc28-7d116341bf06" (UID: "52b5a042-8bb0-474f-bc28-7d116341bf06"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:56:23 crc kubenswrapper[4870]: I0130 08:56:23.865230 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52b5a042-8bb0-474f-bc28-7d116341bf06-kube-api-access-wn7sw" (OuterVolumeSpecName: "kube-api-access-wn7sw") pod "52b5a042-8bb0-474f-bc28-7d116341bf06" (UID: "52b5a042-8bb0-474f-bc28-7d116341bf06"). InnerVolumeSpecName "kube-api-access-wn7sw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:56:23 crc kubenswrapper[4870]: I0130 08:56:23.901405 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52b5a042-8bb0-474f-bc28-7d116341bf06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52b5a042-8bb0-474f-bc28-7d116341bf06" (UID: "52b5a042-8bb0-474f-bc28-7d116341bf06"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:56:23 crc kubenswrapper[4870]: I0130 08:56:23.948606 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52b5a042-8bb0-474f-bc28-7d116341bf06-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 08:56:23 crc kubenswrapper[4870]: I0130 08:56:23.948642 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52b5a042-8bb0-474f-bc28-7d116341bf06-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 08:56:23 crc kubenswrapper[4870]: I0130 08:56:23.948656 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn7sw\" (UniqueName: \"kubernetes.io/projected/52b5a042-8bb0-474f-bc28-7d116341bf06-kube-api-access-wn7sw\") on node \"crc\" DevicePath \"\""
Jan 30 08:56:24 crc kubenswrapper[4870]: E0130 08:56:24.174989 4870 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52b5a042_8bb0_474f_bc28_7d116341bf06.slice\": RecentStats: unable to find data in memory cache]"
Jan 30 08:56:24 crc kubenswrapper[4870]: I0130 08:56:24.257522 4870 generic.go:334] "Generic (PLEG): container finished" podID="52b5a042-8bb0-474f-bc28-7d116341bf06" containerID="3235f9fe80a91166263d30df27913fe34690c14c54edeacf1eb8d9e013e8060b" exitCode=0
Jan 30 08:56:24 crc kubenswrapper[4870]: I0130 08:56:24.257618 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jj6s" event={"ID":"52b5a042-8bb0-474f-bc28-7d116341bf06","Type":"ContainerDied","Data":"3235f9fe80a91166263d30df27913fe34690c14c54edeacf1eb8d9e013e8060b"}
Jan 30 08:56:24 crc kubenswrapper[4870]: I0130 08:56:24.257821 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jj6s" event={"ID":"52b5a042-8bb0-474f-bc28-7d116341bf06","Type":"ContainerDied","Data":"1397bf99db3c0cbef19f957fbbdb2abd4d8aa60d8ee4d5dd3023a74c9e29c5cd"}
Jan 30 08:56:24 crc kubenswrapper[4870]: I0130 08:56:24.257849 4870 scope.go:117] "RemoveContainer" containerID="3235f9fe80a91166263d30df27913fe34690c14c54edeacf1eb8d9e013e8060b"
Jan 30 08:56:24 crc kubenswrapper[4870]: I0130 08:56:24.257669 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6jj6s"
Jan 30 08:56:24 crc kubenswrapper[4870]: I0130 08:56:24.280628 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6jj6s"]
Jan 30 08:56:24 crc kubenswrapper[4870]: I0130 08:56:24.289247 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6jj6s"]
Jan 30 08:56:24 crc kubenswrapper[4870]: I0130 08:56:24.291573 4870 scope.go:117] "RemoveContainer" containerID="f880e1cfa9a280bc4a25e948a919389702e514b95714f1dee0c5caa6cc6ce014"
Jan 30 08:56:24 crc kubenswrapper[4870]: I0130 08:56:24.323600 4870 scope.go:117] "RemoveContainer" containerID="9534298d81270f4262590015fd5426c5e04ac73af7f2212e9942a942047e072a"
Jan 30 08:56:24 crc kubenswrapper[4870]: I0130 08:56:24.365169 4870 scope.go:117] "RemoveContainer" containerID="3235f9fe80a91166263d30df27913fe34690c14c54edeacf1eb8d9e013e8060b"
Jan 30 08:56:24 crc kubenswrapper[4870]: E0130 08:56:24.365987 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3235f9fe80a91166263d30df27913fe34690c14c54edeacf1eb8d9e013e8060b\": container with ID starting with 3235f9fe80a91166263d30df27913fe34690c14c54edeacf1eb8d9e013e8060b not found: ID does not exist" containerID="3235f9fe80a91166263d30df27913fe34690c14c54edeacf1eb8d9e013e8060b"
Jan 30 08:56:24 crc kubenswrapper[4870]: I0130 08:56:24.366037 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3235f9fe80a91166263d30df27913fe34690c14c54edeacf1eb8d9e013e8060b"} err="failed to get container status \"3235f9fe80a91166263d30df27913fe34690c14c54edeacf1eb8d9e013e8060b\": rpc error: code = NotFound desc = could not find container \"3235f9fe80a91166263d30df27913fe34690c14c54edeacf1eb8d9e013e8060b\": container with ID starting with 3235f9fe80a91166263d30df27913fe34690c14c54edeacf1eb8d9e013e8060b not found: ID does not exist"
Jan 30 08:56:24 crc kubenswrapper[4870]: I0130 08:56:24.366078 4870 scope.go:117] "RemoveContainer" containerID="f880e1cfa9a280bc4a25e948a919389702e514b95714f1dee0c5caa6cc6ce014"
Jan 30 08:56:24 crc kubenswrapper[4870]: E0130 08:56:24.366816 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f880e1cfa9a280bc4a25e948a919389702e514b95714f1dee0c5caa6cc6ce014\": container with ID starting with f880e1cfa9a280bc4a25e948a919389702e514b95714f1dee0c5caa6cc6ce014 not found: ID does not exist" containerID="f880e1cfa9a280bc4a25e948a919389702e514b95714f1dee0c5caa6cc6ce014"
Jan 30 08:56:24 crc kubenswrapper[4870]: I0130 08:56:24.366860 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f880e1cfa9a280bc4a25e948a919389702e514b95714f1dee0c5caa6cc6ce014"} err="failed to get container status \"f880e1cfa9a280bc4a25e948a919389702e514b95714f1dee0c5caa6cc6ce014\": rpc error: code = NotFound desc = could not find container \"f880e1cfa9a280bc4a25e948a919389702e514b95714f1dee0c5caa6cc6ce014\": container with ID starting with f880e1cfa9a280bc4a25e948a919389702e514b95714f1dee0c5caa6cc6ce014 not found: ID does not exist"
Jan 30 08:56:24 crc kubenswrapper[4870]: I0130 08:56:24.366912 4870 scope.go:117] "RemoveContainer" containerID="9534298d81270f4262590015fd5426c5e04ac73af7f2212e9942a942047e072a"
Jan 30 08:56:24 crc kubenswrapper[4870]: E0130 08:56:24.367361 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9534298d81270f4262590015fd5426c5e04ac73af7f2212e9942a942047e072a\": container with ID starting with 9534298d81270f4262590015fd5426c5e04ac73af7f2212e9942a942047e072a not found: ID does not exist" containerID="9534298d81270f4262590015fd5426c5e04ac73af7f2212e9942a942047e072a"
Jan 30 08:56:24 crc kubenswrapper[4870]: I0130 08:56:24.367533 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9534298d81270f4262590015fd5426c5e04ac73af7f2212e9942a942047e072a"} err="failed to get container status \"9534298d81270f4262590015fd5426c5e04ac73af7f2212e9942a942047e072a\": rpc error: code = NotFound desc = could not find container \"9534298d81270f4262590015fd5426c5e04ac73af7f2212e9942a942047e072a\": container with ID starting with 9534298d81270f4262590015fd5426c5e04ac73af7f2212e9942a942047e072a not found: ID does not exist"
Jan 30 08:56:26 crc kubenswrapper[4870]: I0130 08:56:26.093646 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52b5a042-8bb0-474f-bc28-7d116341bf06" path="/var/lib/kubelet/pods/52b5a042-8bb0-474f-bc28-7d116341bf06/volumes"
Jan 30 08:56:26 crc kubenswrapper[4870]: I0130 08:56:26.839681 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 30 08:56:26 crc kubenswrapper[4870]: I0130 08:56:26.840238 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8c8b2056-4db2-489e-b1d1-b201e38e84c8" containerName="prometheus" containerID="cri-o://ea102a1406731d57700d5196e250072c2053fa2345212a3d6975e629610cb94c" gracePeriod=600
Jan 30 08:56:26 crc kubenswrapper[4870]: I0130 08:56:26.840318 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8c8b2056-4db2-489e-b1d1-b201e38e84c8" containerName="thanos-sidecar" containerID="cri-o://d46fd5e887baec843bdd4f9f0254772bf4dc50323e052cd052dd2ea4657b7397" gracePeriod=600
Jan 30 08:56:26 crc kubenswrapper[4870]: I0130 08:56:26.840379 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8c8b2056-4db2-489e-b1d1-b201e38e84c8" containerName="config-reloader" containerID="cri-o://a9347a512a592b79cb85be5a5a664bfadec21fed65bd7eacf1a97eb008166eb1" gracePeriod=600
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.292034 4870 generic.go:334] "Generic (PLEG): container finished" podID="8c8b2056-4db2-489e-b1d1-b201e38e84c8" containerID="d46fd5e887baec843bdd4f9f0254772bf4dc50323e052cd052dd2ea4657b7397" exitCode=0
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.292073 4870 generic.go:334] "Generic (PLEG): container finished" podID="8c8b2056-4db2-489e-b1d1-b201e38e84c8" containerID="a9347a512a592b79cb85be5a5a664bfadec21fed65bd7eacf1a97eb008166eb1" exitCode=0
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.292083 4870 generic.go:334] "Generic (PLEG): container finished" podID="8c8b2056-4db2-489e-b1d1-b201e38e84c8" containerID="ea102a1406731d57700d5196e250072c2053fa2345212a3d6975e629610cb94c" exitCode=0
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.292111 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8c8b2056-4db2-489e-b1d1-b201e38e84c8","Type":"ContainerDied","Data":"d46fd5e887baec843bdd4f9f0254772bf4dc50323e052cd052dd2ea4657b7397"}
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.292142 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8c8b2056-4db2-489e-b1d1-b201e38e84c8","Type":"ContainerDied","Data":"a9347a512a592b79cb85be5a5a664bfadec21fed65bd7eacf1a97eb008166eb1"}
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.292155 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8c8b2056-4db2-489e-b1d1-b201e38e84c8","Type":"ContainerDied","Data":"ea102a1406731d57700d5196e250072c2053fa2345212a3d6975e629610cb94c"}
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.435426 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="8c8b2056-4db2-489e-b1d1-b201e38e84c8" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.135:9090/-/ready\": dial tcp 10.217.0.135:9090: connect: connection refused"
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.860932 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.930082 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-secret-combined-ca-bundle\") pod \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") "
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.930356 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8c8b2056-4db2-489e-b1d1-b201e38e84c8-prometheus-metric-storage-rulefiles-0\") pod \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") "
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.930575 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8c8b2056-4db2-489e-b1d1-b201e38e84c8-config-out\") pod \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") "
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.930663 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8c8b2056-4db2-489e-b1d1-b201e38e84c8-prometheus-metric-storage-rulefiles-2\") pod \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") "
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.930803 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8c8b2056-4db2-489e-b1d1-b201e38e84c8-prometheus-metric-storage-rulefiles-1\") pod \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") "
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.931680 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\") pod \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") "
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.931789 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8c8b2056-4db2-489e-b1d1-b201e38e84c8-tls-assets\") pod \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") "
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.931956 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-web-config\") pod \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") "
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.932044 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") "
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.932138 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-config\") pod \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") "
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.932211 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5j9q\" (UniqueName: \"kubernetes.io/projected/8c8b2056-4db2-489e-b1d1-b201e38e84c8-kube-api-access-f5j9q\") pod \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") "
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.932304 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") "
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.932382 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-thanos-prometheus-http-client-file\") pod \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\" (UID: \"8c8b2056-4db2-489e-b1d1-b201e38e84c8\") "
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.932507 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8b2056-4db2-489e-b1d1-b201e38e84c8-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "8c8b2056-4db2-489e-b1d1-b201e38e84c8" (UID: "8c8b2056-4db2-489e-b1d1-b201e38e84c8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.933019 4870 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8c8b2056-4db2-489e-b1d1-b201e38e84c8-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.933463 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8b2056-4db2-489e-b1d1-b201e38e84c8-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "8c8b2056-4db2-489e-b1d1-b201e38e84c8" (UID: "8c8b2056-4db2-489e-b1d1-b201e38e84c8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.936784 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8b2056-4db2-489e-b1d1-b201e38e84c8-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "8c8b2056-4db2-489e-b1d1-b201e38e84c8" (UID: "8c8b2056-4db2-489e-b1d1-b201e38e84c8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.941950 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "8c8b2056-4db2-489e-b1d1-b201e38e84c8" (UID: "8c8b2056-4db2-489e-b1d1-b201e38e84c8"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.942691 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-config" (OuterVolumeSpecName: "config") pod "8c8b2056-4db2-489e-b1d1-b201e38e84c8" (UID: "8c8b2056-4db2-489e-b1d1-b201e38e84c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.942869 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c8b2056-4db2-489e-b1d1-b201e38e84c8-kube-api-access-f5j9q" (OuterVolumeSpecName: "kube-api-access-f5j9q") pod "8c8b2056-4db2-489e-b1d1-b201e38e84c8" (UID: "8c8b2056-4db2-489e-b1d1-b201e38e84c8"). InnerVolumeSpecName "kube-api-access-f5j9q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.946767 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c8b2056-4db2-489e-b1d1-b201e38e84c8-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "8c8b2056-4db2-489e-b1d1-b201e38e84c8" (UID: "8c8b2056-4db2-489e-b1d1-b201e38e84c8"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.948214 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "8c8b2056-4db2-489e-b1d1-b201e38e84c8" (UID: "8c8b2056-4db2-489e-b1d1-b201e38e84c8"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.951730 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "8c8b2056-4db2-489e-b1d1-b201e38e84c8" (UID: "8c8b2056-4db2-489e-b1d1-b201e38e84c8"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.961170 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c8b2056-4db2-489e-b1d1-b201e38e84c8-config-out" (OuterVolumeSpecName: "config-out") pod "8c8b2056-4db2-489e-b1d1-b201e38e84c8" (UID: "8c8b2056-4db2-489e-b1d1-b201e38e84c8"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:56:27 crc kubenswrapper[4870]: I0130 08:56:27.963478 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "8c8b2056-4db2-489e-b1d1-b201e38e84c8" (UID: "8c8b2056-4db2-489e-b1d1-b201e38e84c8"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.000493 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "8c8b2056-4db2-489e-b1d1-b201e38e84c8" (UID: "8c8b2056-4db2-489e-b1d1-b201e38e84c8"). InnerVolumeSpecName "pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.035989 4870 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8c8b2056-4db2-489e-b1d1-b201e38e84c8-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\""
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.036054 4870 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\") on node \"crc\" "
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.036071 4870 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8c8b2056-4db2-489e-b1d1-b201e38e84c8-tls-assets\") on node \"crc\" DevicePath \"\""
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.036085 4870 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\""
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.036102 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-config\") on node \"crc\" DevicePath \"\""
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.036113 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5j9q\" (UniqueName: \"kubernetes.io/projected/8c8b2056-4db2-489e-b1d1-b201e38e84c8-kube-api-access-f5j9q\") on node \"crc\" DevicePath \"\""
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.036125 4870 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\""
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.036137 4870 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.036149 4870 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.036160 4870 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8c8b2056-4db2-489e-b1d1-b201e38e84c8-config-out\") on node \"crc\" DevicePath \"\""
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.036170 4870 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8c8b2056-4db2-489e-b1d1-b201e38e84c8-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\""
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.065098 4870 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.065261 4870 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1") on node "crc"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.100896 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-web-config" (OuterVolumeSpecName: "web-config") pod "8c8b2056-4db2-489e-b1d1-b201e38e84c8" (UID: "8c8b2056-4db2-489e-b1d1-b201e38e84c8"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.139995 4870 reconciler_common.go:293] "Volume detached for volume \"pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\") on node \"crc\" DevicePath \"\""
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.140030 4870 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8c8b2056-4db2-489e-b1d1-b201e38e84c8-web-config\") on node \"crc\" DevicePath \"\""
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.304920 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8c8b2056-4db2-489e-b1d1-b201e38e84c8","Type":"ContainerDied","Data":"290c2383e9eae83628bb57bb648be794756981366804a5738c5a44985dd7ad40"}
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.304977 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.304985 4870 scope.go:117] "RemoveContainer" containerID="d46fd5e887baec843bdd4f9f0254772bf4dc50323e052cd052dd2ea4657b7397"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.334076 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.335084 4870 scope.go:117] "RemoveContainer" containerID="a9347a512a592b79cb85be5a5a664bfadec21fed65bd7eacf1a97eb008166eb1"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.347532 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.359669 4870 scope.go:117] "RemoveContainer" containerID="ea102a1406731d57700d5196e250072c2053fa2345212a3d6975e629610cb94c"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.374863 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 30 08:56:28 crc kubenswrapper[4870]: E0130 08:56:28.375297 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8b2056-4db2-489e-b1d1-b201e38e84c8" containerName="config-reloader"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.375314 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8b2056-4db2-489e-b1d1-b201e38e84c8" containerName="config-reloader"
Jan 30 08:56:28 crc kubenswrapper[4870]: E0130 08:56:28.375326 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8b2056-4db2-489e-b1d1-b201e38e84c8" containerName="thanos-sidecar"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.375331 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8b2056-4db2-489e-b1d1-b201e38e84c8" containerName="thanos-sidecar"
Jan 30 08:56:28 crc kubenswrapper[4870]: E0130 08:56:28.375341 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8b2056-4db2-489e-b1d1-b201e38e84c8" containerName="prometheus"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.375347 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8b2056-4db2-489e-b1d1-b201e38e84c8" containerName="prometheus"
Jan 30 08:56:28 crc kubenswrapper[4870]: E0130 08:56:28.375372 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b5a042-8bb0-474f-bc28-7d116341bf06" containerName="extract-content"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.375377 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b5a042-8bb0-474f-bc28-7d116341bf06" containerName="extract-content"
Jan 30 08:56:28 crc kubenswrapper[4870]: E0130 08:56:28.375392 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b5a042-8bb0-474f-bc28-7d116341bf06" containerName="registry-server"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.375398 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b5a042-8bb0-474f-bc28-7d116341bf06" containerName="registry-server"
Jan 30 08:56:28 crc kubenswrapper[4870]: E0130 08:56:28.375408 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b5a042-8bb0-474f-bc28-7d116341bf06" containerName="extract-utilities"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.375414 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b5a042-8bb0-474f-bc28-7d116341bf06" containerName="extract-utilities"
Jan 30 08:56:28 crc kubenswrapper[4870]: E0130 08:56:28.375427 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8b2056-4db2-489e-b1d1-b201e38e84c8" containerName="init-config-reloader"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.375433 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8b2056-4db2-489e-b1d1-b201e38e84c8" containerName="init-config-reloader"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.375796 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8b2056-4db2-489e-b1d1-b201e38e84c8" containerName="prometheus"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.375822 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8b2056-4db2-489e-b1d1-b201e38e84c8" containerName="thanos-sidecar"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.375838 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="52b5a042-8bb0-474f-bc28-7d116341bf06" containerName="registry-server"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.375855 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8b2056-4db2-489e-b1d1-b201e38e84c8" containerName="config-reloader"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.383788 4870 scope.go:117] "RemoveContainer" containerID="93e4e1345741b60dca904480e4327da8f596dec2a8d2178c87fe6d5632a2daeb"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.384382 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.388555 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.390258 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.390556 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.391016 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.391087 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-88lql"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.391220 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.391235 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.402606 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.414592 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.445182 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1a4d5397-32f0-4cc0-919b-cf4ed004b797-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.445236 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1a4d5397-32f0-4cc0-919b-cf4ed004b797-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.445279 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1a4d5397-32f0-4cc0-919b-cf4ed004b797-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.445319 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1a4d5397-32f0-4cc0-919b-cf4ed004b797-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.445353 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1a4d5397-32f0-4cc0-919b-cf4ed004b797-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.445401 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1a4d5397-32f0-4cc0-919b-cf4ed004b797-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.445521 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrwrq\" (UniqueName: \"kubernetes.io/projected/1a4d5397-32f0-4cc0-919b-cf4ed004b797-kube-api-access-hrwrq\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.445597 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1a4d5397-32f0-4cc0-919b-cf4ed004b797-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.445684 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1a4d5397-32f0-4cc0-919b-cf4ed004b797-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.445729 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.445869 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a4d5397-32f0-4cc0-919b-cf4ed004b797-config\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.445924 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a4d5397-32f0-4cc0-919b-cf4ed004b797-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.445953 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1a4d5397-32f0-4cc0-919b-cf4ed004b797-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.547618 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1a4d5397-32f0-4cc0-919b-cf4ed004b797-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.548273 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1a4d5397-32f0-4cc0-919b-cf4ed004b797-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.548326 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1a4d5397-32f0-4cc0-919b-cf4ed004b797-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.548370 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1a4d5397-32f0-4cc0-919b-cf4ed004b797-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.548400 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1a4d5397-32f0-4cc0-919b-cf4ed004b797-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.548457 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1a4d5397-32f0-4cc0-919b-cf4ed004b797-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.548563 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrwrq\" (UniqueName: \"kubernetes.io/projected/1a4d5397-32f0-4cc0-919b-cf4ed004b797-kube-api-access-hrwrq\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.548603 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1a4d5397-32f0-4cc0-919b-cf4ed004b797-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.548672 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1a4d5397-32f0-4cc0-919b-cf4ed004b797-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.548717 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.548755 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a4d5397-32f0-4cc0-919b-cf4ed004b797-config\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.548778 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a4d5397-32f0-4cc0-919b-cf4ed004b797-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.548801 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1a4d5397-32f0-4cc0-919b-cf4ed004b797-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.549593 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1a4d5397-32f0-4cc0-919b-cf4ed004b797-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.549593 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1a4d5397-32f0-4cc0-919b-cf4ed004b797-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.549662 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1a4d5397-32f0-4cc0-919b-cf4ed004b797-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.553450 4870 csi_attacher.go:380]
kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.553496 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b608408b27cf3925c08af2a9b3a133a2b5eb87db3a290a5641371b0533b7f7d2/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.554216 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1a4d5397-32f0-4cc0-919b-cf4ed004b797-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.556140 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1a4d5397-32f0-4cc0-919b-cf4ed004b797-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.558972 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a4d5397-32f0-4cc0-919b-cf4ed004b797-config\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.558997 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1a4d5397-32f0-4cc0-919b-cf4ed004b797-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.559258 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1a4d5397-32f0-4cc0-919b-cf4ed004b797-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.559479 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1a4d5397-32f0-4cc0-919b-cf4ed004b797-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.562681 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a4d5397-32f0-4cc0-919b-cf4ed004b797-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.564024 4870 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1a4d5397-32f0-4cc0-919b-cf4ed004b797-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.574250 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrwrq\" (UniqueName: \"kubernetes.io/projected/1a4d5397-32f0-4cc0-919b-cf4ed004b797-kube-api-access-hrwrq\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.620641 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e78ca-f27b-4988-a731-fdbb92ea32f1\") pod \"prometheus-metric-storage-0\" (UID: \"1a4d5397-32f0-4cc0-919b-cf4ed004b797\") " pod="openstack/prometheus-metric-storage-0" Jan 30 08:56:28 crc kubenswrapper[4870]: I0130 08:56:28.762437 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 08:56:29 crc kubenswrapper[4870]: I0130 08:56:29.309658 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 08:56:29 crc kubenswrapper[4870]: I0130 08:56:29.346586 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1a4d5397-32f0-4cc0-919b-cf4ed004b797","Type":"ContainerStarted","Data":"16cd24868de028fb744a58f52f28012a8226e6660a3e945e583d11350e5d9fa9"} Jan 30 08:56:30 crc kubenswrapper[4870]: I0130 08:56:30.087166 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c8b2056-4db2-489e-b1d1-b201e38e84c8" path="/var/lib/kubelet/pods/8c8b2056-4db2-489e-b1d1-b201e38e84c8/volumes" Jan 30 08:56:33 crc kubenswrapper[4870]: I0130 08:56:33.385048 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1a4d5397-32f0-4cc0-919b-cf4ed004b797","Type":"ContainerStarted","Data":"9a9c2d1a2710347a06c38820d11f1b5b73a9bf551910bc03bc75d44b0d9cc52f"} Jan 30 08:56:41 crc kubenswrapper[4870]: I0130 08:56:41.461444 4870 generic.go:334] "Generic (PLEG): container finished" podID="1a4d5397-32f0-4cc0-919b-cf4ed004b797" containerID="9a9c2d1a2710347a06c38820d11f1b5b73a9bf551910bc03bc75d44b0d9cc52f" exitCode=0 Jan 30 08:56:41 crc kubenswrapper[4870]: I0130 08:56:41.461530 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1a4d5397-32f0-4cc0-919b-cf4ed004b797","Type":"ContainerDied","Data":"9a9c2d1a2710347a06c38820d11f1b5b73a9bf551910bc03bc75d44b0d9cc52f"} Jan 30 08:56:42 crc kubenswrapper[4870]: I0130 08:56:42.474977 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1a4d5397-32f0-4cc0-919b-cf4ed004b797","Type":"ContainerStarted","Data":"5e54e3c60810f07593c4f31dd1619612db1222c28b053ffcf6d3f53579eab58f"} Jan 30 08:56:45 crc kubenswrapper[4870]: I0130 08:56:45.513456 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"1a4d5397-32f0-4cc0-919b-cf4ed004b797","Type":"ContainerStarted","Data":"260cd41ef24f5ba7b55ab1319b66e0603fed3f8b96c623eb70572465578ecd8f"} Jan 30 08:56:46 crc kubenswrapper[4870]: I0130 08:56:46.525368 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1a4d5397-32f0-4cc0-919b-cf4ed004b797","Type":"ContainerStarted","Data":"82a2bd08b9c97b2c84ab230c42a757060be2382a533996f15685f7dd8eda511d"} Jan 30 08:56:46 crc kubenswrapper[4870]: I0130 08:56:46.570171 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.570147523 podStartE2EDuration="18.570147523s" podCreationTimestamp="2026-01-30 08:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:56:46.550838119 +0000 UTC m=+2845.246385238" watchObservedRunningTime="2026-01-30 08:56:46.570147523 +0000 UTC m=+2845.265694642" Jan 30 08:56:48 crc kubenswrapper[4870]: I0130 08:56:48.762849 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 30 08:56:58 crc kubenswrapper[4870]: I0130 08:56:58.763406 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 30 08:56:58 crc kubenswrapper[4870]: I0130 08:56:58.772502 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 30 08:56:59 crc kubenswrapper[4870]: I0130 08:56:59.658018 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.562659 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.565185 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.567687 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.567904 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-w7v26" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.568200 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.569112 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.587278 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.676622 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-config-data\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.676665 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.676734 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bggh\" (UniqueName: \"kubernetes.io/projected/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-kube-api-access-8bggh\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.676892 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.676986 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.677085 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.677204 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.677407 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.677458 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.779527 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.779604 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.779671 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.779755 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.780166 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.780382 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.780627 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.780707 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-config-data\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.780736 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.780823 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bggh\" (UniqueName: \"kubernetes.io/projected/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-kube-api-access-8bggh\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.780875 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.781149 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.782043 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-config-data\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.782057 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.792622 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.793047 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " 
pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.796739 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.801328 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bggh\" (UniqueName: \"kubernetes.io/projected/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-kube-api-access-8bggh\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.823749 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " pod="openstack/tempest-tests-tempest" Jan 30 08:57:14 crc kubenswrapper[4870]: I0130 08:57:14.884851 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 08:57:15 crc kubenswrapper[4870]: I0130 08:57:15.395319 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 30 08:57:15 crc kubenswrapper[4870]: I0130 08:57:15.799363 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a","Type":"ContainerStarted","Data":"1c881927627a156ba1416d85da9f209f5ec355b05e5dce2ac4e41aa800f2573b"} Jan 30 08:57:25 crc kubenswrapper[4870]: I0130 08:57:25.249732 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:57:25 crc kubenswrapper[4870]: I0130 08:57:25.251036 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:57:26 crc kubenswrapper[4870]: I0130 08:57:26.297923 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 30 08:57:27 crc kubenswrapper[4870]: I0130 08:57:27.933210 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a","Type":"ContainerStarted","Data":"e510327daa135710d56632aefcbd974a031585074a72c0b411cbaf1ee33eb7a9"} Jan 30 08:57:27 crc kubenswrapper[4870]: I0130 08:57:27.957480 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.061621058 podStartE2EDuration="14.957463224s" podCreationTimestamp="2026-01-30 08:57:13 +0000 UTC" firstStartedPulling="2026-01-30 08:57:15.399335637 +0000 UTC m=+2874.094882756" lastFinishedPulling="2026-01-30 08:57:26.295177813 +0000 UTC m=+2884.990724922" observedRunningTime="2026-01-30 08:57:27.946890494 +0000 UTC m=+2886.642437603" 
watchObservedRunningTime="2026-01-30 08:57:27.957463224 +0000 UTC m=+2886.653010333" Jan 30 08:57:43 crc kubenswrapper[4870]: I0130 08:57:43.808658 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dk8db"] Jan 30 08:57:43 crc kubenswrapper[4870]: I0130 08:57:43.814484 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dk8db" Jan 30 08:57:43 crc kubenswrapper[4870]: I0130 08:57:43.834254 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dk8db"] Jan 30 08:57:43 crc kubenswrapper[4870]: I0130 08:57:43.896428 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a375fc2-49c4-42c7-a029-34fde5c159cf-catalog-content\") pod \"certified-operators-dk8db\" (UID: \"5a375fc2-49c4-42c7-a029-34fde5c159cf\") " pod="openshift-marketplace/certified-operators-dk8db" Jan 30 08:57:43 crc kubenswrapper[4870]: I0130 08:57:43.896507 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a375fc2-49c4-42c7-a029-34fde5c159cf-utilities\") pod \"certified-operators-dk8db\" (UID: \"5a375fc2-49c4-42c7-a029-34fde5c159cf\") " pod="openshift-marketplace/certified-operators-dk8db" Jan 30 08:57:43 crc kubenswrapper[4870]: I0130 08:57:43.896585 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmq5v\" (UniqueName: \"kubernetes.io/projected/5a375fc2-49c4-42c7-a029-34fde5c159cf-kube-api-access-gmq5v\") pod \"certified-operators-dk8db\" (UID: \"5a375fc2-49c4-42c7-a029-34fde5c159cf\") " pod="openshift-marketplace/certified-operators-dk8db" Jan 30 08:57:43 crc kubenswrapper[4870]: I0130 08:57:43.999010 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a375fc2-49c4-42c7-a029-34fde5c159cf-utilities\") pod \"certified-operators-dk8db\" (UID: \"5a375fc2-49c4-42c7-a029-34fde5c159cf\") " pod="openshift-marketplace/certified-operators-dk8db" Jan 30 08:57:43 crc kubenswrapper[4870]: I0130 08:57:43.999431 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmq5v\" (UniqueName: \"kubernetes.io/projected/5a375fc2-49c4-42c7-a029-34fde5c159cf-kube-api-access-gmq5v\") pod \"certified-operators-dk8db\" (UID: \"5a375fc2-49c4-42c7-a029-34fde5c159cf\") " pod="openshift-marketplace/certified-operators-dk8db" Jan 30 08:57:43 crc kubenswrapper[4870]: I0130 08:57:43.999544 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a375fc2-49c4-42c7-a029-34fde5c159cf-utilities\") pod \"certified-operators-dk8db\" (UID: \"5a375fc2-49c4-42c7-a029-34fde5c159cf\") " pod="openshift-marketplace/certified-operators-dk8db" Jan 30 08:57:43 crc kubenswrapper[4870]: I0130 08:57:43.999574 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a375fc2-49c4-42c7-a029-34fde5c159cf-catalog-content\") pod \"certified-operators-dk8db\" (UID: \"5a375fc2-49c4-42c7-a029-34fde5c159cf\") " pod="openshift-marketplace/certified-operators-dk8db" Jan 30 08:57:44 crc kubenswrapper[4870]: I0130 08:57:43.999805 4870 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a375fc2-49c4-42c7-a029-34fde5c159cf-catalog-content\") pod \"certified-operators-dk8db\" (UID: \"5a375fc2-49c4-42c7-a029-34fde5c159cf\") " pod="openshift-marketplace/certified-operators-dk8db" Jan 30 08:57:44 crc kubenswrapper[4870]: I0130 08:57:44.019235 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmq5v\" (UniqueName: \"kubernetes.io/projected/5a375fc2-49c4-42c7-a029-34fde5c159cf-kube-api-access-gmq5v\") pod \"certified-operators-dk8db\" (UID: \"5a375fc2-49c4-42c7-a029-34fde5c159cf\") " pod="openshift-marketplace/certified-operators-dk8db" Jan 30 08:57:44 crc kubenswrapper[4870]: I0130 08:57:44.142457 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dk8db" Jan 30 08:57:44 crc kubenswrapper[4870]: I0130 08:57:44.658011 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dk8db"] Jan 30 08:57:44 crc kubenswrapper[4870]: W0130 08:57:44.659194 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a375fc2_49c4_42c7_a029_34fde5c159cf.slice/crio-8bc1dd9b3b99058b263a44cf32457ac4dd41def79ff1d253204239fb663b9df3 WatchSource:0}: Error finding container 8bc1dd9b3b99058b263a44cf32457ac4dd41def79ff1d253204239fb663b9df3: Status 404 returned error can't find the container with id 8bc1dd9b3b99058b263a44cf32457ac4dd41def79ff1d253204239fb663b9df3 Jan 30 08:57:45 crc kubenswrapper[4870]: I0130 08:57:45.591283 4870 generic.go:334] "Generic (PLEG): container finished" podID="5a375fc2-49c4-42c7-a029-34fde5c159cf" containerID="fa7c57f7496863d99ccce13e5e7a680e7f7c52a9ee29c22ae217f26d0ec42231" exitCode=0 Jan 30 08:57:45 crc kubenswrapper[4870]: I0130 08:57:45.591355 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk8db" event={"ID":"5a375fc2-49c4-42c7-a029-34fde5c159cf","Type":"ContainerDied","Data":"fa7c57f7496863d99ccce13e5e7a680e7f7c52a9ee29c22ae217f26d0ec42231"} Jan 30 08:57:45 crc kubenswrapper[4870]: I0130 08:57:45.591564 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk8db" event={"ID":"5a375fc2-49c4-42c7-a029-34fde5c159cf","Type":"ContainerStarted","Data":"8bc1dd9b3b99058b263a44cf32457ac4dd41def79ff1d253204239fb663b9df3"} Jan 30 08:57:46 crc kubenswrapper[4870]: I0130 08:57:46.602652 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk8db" event={"ID":"5a375fc2-49c4-42c7-a029-34fde5c159cf","Type":"ContainerStarted","Data":"0dee4777b76a82b1fdc63504969da64f34a5856c1db03793c852224b9b42f73f"} Jan 30 08:57:48 crc kubenswrapper[4870]: I0130 08:57:48.626708 4870 generic.go:334] "Generic (PLEG): container finished" podID="5a375fc2-49c4-42c7-a029-34fde5c159cf" containerID="0dee4777b76a82b1fdc63504969da64f34a5856c1db03793c852224b9b42f73f" exitCode=0 Jan 30 08:57:48 crc kubenswrapper[4870]: I0130 08:57:48.626823 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk8db" event={"ID":"5a375fc2-49c4-42c7-a029-34fde5c159cf","Type":"ContainerDied","Data":"0dee4777b76a82b1fdc63504969da64f34a5856c1db03793c852224b9b42f73f"} Jan 30 08:57:48 crc kubenswrapper[4870]: I0130 08:57:48.629853 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 
08:57:49 crc kubenswrapper[4870]: I0130 08:57:49.638445 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk8db" event={"ID":"5a375fc2-49c4-42c7-a029-34fde5c159cf","Type":"ContainerStarted","Data":"4ddb7739156f016ea120f6a0b74e693c401cad10f668a738e8930df3886c0c61"} Jan 30 08:57:49 crc kubenswrapper[4870]: I0130 08:57:49.665679 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dk8db" podStartSLOduration=3.266985549 podStartE2EDuration="6.665653427s" podCreationTimestamp="2026-01-30 08:57:43 +0000 UTC" firstStartedPulling="2026-01-30 08:57:45.593283024 +0000 UTC m=+2904.288830173" lastFinishedPulling="2026-01-30 08:57:48.991950942 +0000 UTC m=+2907.687498051" observedRunningTime="2026-01-30 08:57:49.657733329 +0000 UTC m=+2908.353280438" watchObservedRunningTime="2026-01-30 08:57:49.665653427 +0000 UTC m=+2908.361200536" Jan 30 08:57:54 crc kubenswrapper[4870]: I0130 08:57:54.142714 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dk8db" Jan 30 08:57:54 crc kubenswrapper[4870]: I0130 08:57:54.143331 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dk8db" Jan 30 08:57:54 crc kubenswrapper[4870]: I0130 08:57:54.202370 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dk8db" Jan 30 08:57:54 crc kubenswrapper[4870]: I0130 08:57:54.742041 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dk8db" Jan 30 08:57:54 crc kubenswrapper[4870]: I0130 08:57:54.819809 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dk8db"] Jan 30 08:57:55 crc kubenswrapper[4870]: I0130 08:57:55.249402 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:57:55 crc kubenswrapper[4870]: I0130 08:57:55.249732 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:57:56 crc kubenswrapper[4870]: I0130 08:57:56.700453 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dk8db" podUID="5a375fc2-49c4-42c7-a029-34fde5c159cf" containerName="registry-server" containerID="cri-o://4ddb7739156f016ea120f6a0b74e693c401cad10f668a738e8930df3886c0c61" gracePeriod=2 Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.219771 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dk8db" Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.413616 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a375fc2-49c4-42c7-a029-34fde5c159cf-catalog-content\") pod \"5a375fc2-49c4-42c7-a029-34fde5c159cf\" (UID: \"5a375fc2-49c4-42c7-a029-34fde5c159cf\") " Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.413701 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmq5v\" (UniqueName: \"kubernetes.io/projected/5a375fc2-49c4-42c7-a029-34fde5c159cf-kube-api-access-gmq5v\") pod \"5a375fc2-49c4-42c7-a029-34fde5c159cf\" (UID: \"5a375fc2-49c4-42c7-a029-34fde5c159cf\") " Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.413775 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a375fc2-49c4-42c7-a029-34fde5c159cf-utilities\") pod \"5a375fc2-49c4-42c7-a029-34fde5c159cf\" (UID: \"5a375fc2-49c4-42c7-a029-34fde5c159cf\") " Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.415011 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a375fc2-49c4-42c7-a029-34fde5c159cf-utilities" (OuterVolumeSpecName: "utilities") pod "5a375fc2-49c4-42c7-a029-34fde5c159cf" (UID: "5a375fc2-49c4-42c7-a029-34fde5c159cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.425104 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a375fc2-49c4-42c7-a029-34fde5c159cf-kube-api-access-gmq5v" (OuterVolumeSpecName: "kube-api-access-gmq5v") pod "5a375fc2-49c4-42c7-a029-34fde5c159cf" (UID: "5a375fc2-49c4-42c7-a029-34fde5c159cf"). InnerVolumeSpecName "kube-api-access-gmq5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.497067 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a375fc2-49c4-42c7-a029-34fde5c159cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a375fc2-49c4-42c7-a029-34fde5c159cf" (UID: "5a375fc2-49c4-42c7-a029-34fde5c159cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.517025 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a375fc2-49c4-42c7-a029-34fde5c159cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.517072 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a375fc2-49c4-42c7-a029-34fde5c159cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.517086 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmq5v\" (UniqueName: \"kubernetes.io/projected/5a375fc2-49c4-42c7-a029-34fde5c159cf-kube-api-access-gmq5v\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.715629 4870 generic.go:334] "Generic (PLEG): container finished" podID="5a375fc2-49c4-42c7-a029-34fde5c159cf" containerID="4ddb7739156f016ea120f6a0b74e693c401cad10f668a738e8930df3886c0c61" exitCode=0 Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.715689 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dk8db" Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.715714 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk8db" event={"ID":"5a375fc2-49c4-42c7-a029-34fde5c159cf","Type":"ContainerDied","Data":"4ddb7739156f016ea120f6a0b74e693c401cad10f668a738e8930df3886c0c61"} Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.716022 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk8db" event={"ID":"5a375fc2-49c4-42c7-a029-34fde5c159cf","Type":"ContainerDied","Data":"8bc1dd9b3b99058b263a44cf32457ac4dd41def79ff1d253204239fb663b9df3"} Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.716043 4870 scope.go:117] "RemoveContainer" containerID="4ddb7739156f016ea120f6a0b74e693c401cad10f668a738e8930df3886c0c61" Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.751261 4870 scope.go:117] "RemoveContainer" containerID="0dee4777b76a82b1fdc63504969da64f34a5856c1db03793c852224b9b42f73f" Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.753558 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dk8db"] Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.771739 4870 scope.go:117] "RemoveContainer" containerID="fa7c57f7496863d99ccce13e5e7a680e7f7c52a9ee29c22ae217f26d0ec42231" Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.772128 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dk8db"] Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.834429 4870 scope.go:117] "RemoveContainer" containerID="4ddb7739156f016ea120f6a0b74e693c401cad10f668a738e8930df3886c0c61" Jan 30 08:57:57 crc kubenswrapper[4870]: E0130 08:57:57.835083 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ddb7739156f016ea120f6a0b74e693c401cad10f668a738e8930df3886c0c61\": container with ID starting with 4ddb7739156f016ea120f6a0b74e693c401cad10f668a738e8930df3886c0c61 not found: ID does not exist" containerID="4ddb7739156f016ea120f6a0b74e693c401cad10f668a738e8930df3886c0c61" Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.835139 
4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ddb7739156f016ea120f6a0b74e693c401cad10f668a738e8930df3886c0c61"} err="failed to get container status \"4ddb7739156f016ea120f6a0b74e693c401cad10f668a738e8930df3886c0c61\": rpc error: code = NotFound desc = could not find container \"4ddb7739156f016ea120f6a0b74e693c401cad10f668a738e8930df3886c0c61\": container with ID starting with 4ddb7739156f016ea120f6a0b74e693c401cad10f668a738e8930df3886c0c61 not found: ID does not exist" Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.835170 4870 scope.go:117] "RemoveContainer" containerID="0dee4777b76a82b1fdc63504969da64f34a5856c1db03793c852224b9b42f73f" Jan 30 08:57:57 crc kubenswrapper[4870]: E0130 08:57:57.835679 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dee4777b76a82b1fdc63504969da64f34a5856c1db03793c852224b9b42f73f\": container with ID starting with 0dee4777b76a82b1fdc63504969da64f34a5856c1db03793c852224b9b42f73f not found: ID does not exist" containerID="0dee4777b76a82b1fdc63504969da64f34a5856c1db03793c852224b9b42f73f" Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.835729 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dee4777b76a82b1fdc63504969da64f34a5856c1db03793c852224b9b42f73f"} err="failed to get container status \"0dee4777b76a82b1fdc63504969da64f34a5856c1db03793c852224b9b42f73f\": rpc error: code = NotFound desc = could not find container \"0dee4777b76a82b1fdc63504969da64f34a5856c1db03793c852224b9b42f73f\": container with ID starting with 0dee4777b76a82b1fdc63504969da64f34a5856c1db03793c852224b9b42f73f not found: ID does not exist" Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.835764 4870 scope.go:117] "RemoveContainer" containerID="fa7c57f7496863d99ccce13e5e7a680e7f7c52a9ee29c22ae217f26d0ec42231" Jan 30 08:57:57 crc kubenswrapper[4870]: E0130 08:57:57.836192 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa7c57f7496863d99ccce13e5e7a680e7f7c52a9ee29c22ae217f26d0ec42231\": container with ID starting with fa7c57f7496863d99ccce13e5e7a680e7f7c52a9ee29c22ae217f26d0ec42231 not found: ID does not exist" containerID="fa7c57f7496863d99ccce13e5e7a680e7f7c52a9ee29c22ae217f26d0ec42231" Jan 30 08:57:57 crc kubenswrapper[4870]: I0130 08:57:57.836215 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa7c57f7496863d99ccce13e5e7a680e7f7c52a9ee29c22ae217f26d0ec42231"} err="failed to get container status \"fa7c57f7496863d99ccce13e5e7a680e7f7c52a9ee29c22ae217f26d0ec42231\": rpc error: code = NotFound desc = could not find container \"fa7c57f7496863d99ccce13e5e7a680e7f7c52a9ee29c22ae217f26d0ec42231\": container with ID starting with fa7c57f7496863d99ccce13e5e7a680e7f7c52a9ee29c22ae217f26d0ec42231 not found: ID does not exist" Jan 30 08:57:58 crc kubenswrapper[4870]: I0130 08:57:58.087372 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a375fc2-49c4-42c7-a029-34fde5c159cf" path="/var/lib/kubelet/pods/5a375fc2-49c4-42c7-a029-34fde5c159cf/volumes" Jan 30 08:58:25 crc kubenswrapper[4870]: I0130 08:58:25.249291 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:58:25 crc kubenswrapper[4870]: I0130 08:58:25.249786 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:58:25 crc kubenswrapper[4870]: I0130 08:58:25.249841 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 08:58:25 crc kubenswrapper[4870]: I0130 08:58:25.250721 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283"} pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:58:25 crc kubenswrapper[4870]: I0130 08:58:25.250790 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" containerID="cri-o://166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283" gracePeriod=600 Jan 30 08:58:25 crc kubenswrapper[4870]: E0130 08:58:25.383094 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:58:25 crc kubenswrapper[4870]: I0130 08:58:25.996726 4870 generic.go:334] "Generic (PLEG): container finished" podID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283" exitCode=0 Jan 30 08:58:25 crc kubenswrapper[4870]: I0130 08:58:25.996780 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerDied","Data":"166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283"} Jan 30 08:58:25 crc kubenswrapper[4870]: I0130 08:58:25.996841 4870 scope.go:117] "RemoveContainer" containerID="9db7902fb455898f2a67824f5d2bac1880accc6a6ce6fbe42d5af520837903b4" Jan 30 08:58:25 crc kubenswrapper[4870]: I0130 08:58:25.997823 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283" Jan 30 08:58:25 crc kubenswrapper[4870]: E0130 08:58:25.998519 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:58:39 crc kubenswrapper[4870]: I0130 08:58:39.075208 4870 scope.go:117] "RemoveContainer" 
containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283" Jan 30 08:58:39 crc kubenswrapper[4870]: E0130 08:58:39.076327 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:58:50 crc kubenswrapper[4870]: I0130 08:58:50.074903 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283" Jan 30 08:58:50 crc kubenswrapper[4870]: E0130 08:58:50.075965 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:59:03 crc kubenswrapper[4870]: I0130 08:59:03.075381 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283" Jan 30 08:59:03 crc kubenswrapper[4870]: E0130 08:59:03.076272 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:59:17 crc kubenswrapper[4870]: I0130 08:59:17.075621 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283" Jan 30 08:59:17 crc kubenswrapper[4870]: E0130 08:59:17.076807 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:59:30 crc kubenswrapper[4870]: I0130 08:59:30.074898 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283" Jan 30 08:59:30 crc kubenswrapper[4870]: E0130 08:59:30.075676 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:59:41 crc kubenswrapper[4870]: I0130 08:59:41.075469 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283" Jan 30 08:59:41 crc kubenswrapper[4870]: E0130 08:59:41.076768 4870 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 08:59:53 crc kubenswrapper[4870]: I0130 08:59:53.075146 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283" Jan 30 08:59:53 crc kubenswrapper[4870]: E0130 08:59:53.076200 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:00:00 crc kubenswrapper[4870]: I0130 09:00:00.151056 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496060-gq9mb"] Jan 30 09:00:00 crc kubenswrapper[4870]: E0130 09:00:00.152166 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a375fc2-49c4-42c7-a029-34fde5c159cf" containerName="extract-utilities" Jan 30 09:00:00 crc kubenswrapper[4870]: I0130 09:00:00.152186 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a375fc2-49c4-42c7-a029-34fde5c159cf" containerName="extract-utilities" Jan 30 09:00:00 crc kubenswrapper[4870]: E0130 09:00:00.152214 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a375fc2-49c4-42c7-a029-34fde5c159cf" containerName="registry-server" Jan 30 09:00:00 crc kubenswrapper[4870]: I0130 09:00:00.152222 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a375fc2-49c4-42c7-a029-34fde5c159cf" containerName="registry-server" Jan 30 09:00:00 crc kubenswrapper[4870]: E0130 09:00:00.152248 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a375fc2-49c4-42c7-a029-34fde5c159cf" containerName="extract-content" Jan 30 09:00:00 crc kubenswrapper[4870]: I0130 09:00:00.152257 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a375fc2-49c4-42c7-a029-34fde5c159cf" containerName="extract-content" Jan 30 09:00:00 crc kubenswrapper[4870]: I0130 09:00:00.152505 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a375fc2-49c4-42c7-a029-34fde5c159cf" containerName="registry-server" Jan 30 09:00:00 crc kubenswrapper[4870]: I0130 09:00:00.155010 4870 util.go:30] "No sandbox for pod can be found. 
Jan 30 09:00:00 crc kubenswrapper[4870]: I0130 09:00:00.161289 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 09:00:00 crc kubenswrapper[4870]: I0130 09:00:00.161289 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 09:00:00 crc kubenswrapper[4870]: I0130 09:00:00.182633 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496060-gq9mb"]
Jan 30 09:00:00 crc kubenswrapper[4870]: I0130 09:00:00.218216 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f537a705-b98d-4cc1-8fba-f9fb4145fc33-config-volume\") pod \"collect-profiles-29496060-gq9mb\" (UID: \"f537a705-b98d-4cc1-8fba-f9fb4145fc33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-gq9mb"
Jan 30 09:00:00 crc kubenswrapper[4870]: I0130 09:00:00.218348 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp7jx\" (UniqueName: \"kubernetes.io/projected/f537a705-b98d-4cc1-8fba-f9fb4145fc33-kube-api-access-xp7jx\") pod \"collect-profiles-29496060-gq9mb\" (UID: \"f537a705-b98d-4cc1-8fba-f9fb4145fc33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-gq9mb"
Jan 30 09:00:00 crc kubenswrapper[4870]: I0130 09:00:00.218499 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f537a705-b98d-4cc1-8fba-f9fb4145fc33-secret-volume\") pod \"collect-profiles-29496060-gq9mb\" (UID: \"f537a705-b98d-4cc1-8fba-f9fb4145fc33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-gq9mb"
Jan 30 09:00:00 crc kubenswrapper[4870]: I0130 09:00:00.320593 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f537a705-b98d-4cc1-8fba-f9fb4145fc33-config-volume\") pod \"collect-profiles-29496060-gq9mb\" (UID: \"f537a705-b98d-4cc1-8fba-f9fb4145fc33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-gq9mb"
Jan 30 09:00:00 crc kubenswrapper[4870]: I0130 09:00:00.320657 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp7jx\" (UniqueName: \"kubernetes.io/projected/f537a705-b98d-4cc1-8fba-f9fb4145fc33-kube-api-access-xp7jx\") pod \"collect-profiles-29496060-gq9mb\" (UID: \"f537a705-b98d-4cc1-8fba-f9fb4145fc33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-gq9mb"
Jan 30 09:00:00 crc kubenswrapper[4870]: I0130 09:00:00.320779 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f537a705-b98d-4cc1-8fba-f9fb4145fc33-secret-volume\") pod \"collect-profiles-29496060-gq9mb\" (UID: \"f537a705-b98d-4cc1-8fba-f9fb4145fc33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-gq9mb"
Jan 30 09:00:00 crc kubenswrapper[4870]: I0130 09:00:00.321560 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f537a705-b98d-4cc1-8fba-f9fb4145fc33-config-volume\") pod \"collect-profiles-29496060-gq9mb\" (UID: \"f537a705-b98d-4cc1-8fba-f9fb4145fc33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-gq9mb"
\"collect-profiles-29496060-gq9mb\" (UID: \"f537a705-b98d-4cc1-8fba-f9fb4145fc33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-gq9mb" Jan 30 09:00:00 crc kubenswrapper[4870]: I0130 09:00:00.328057 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f537a705-b98d-4cc1-8fba-f9fb4145fc33-secret-volume\") pod \"collect-profiles-29496060-gq9mb\" (UID: \"f537a705-b98d-4cc1-8fba-f9fb4145fc33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-gq9mb" Jan 30 09:00:00 crc kubenswrapper[4870]: I0130 09:00:00.342576 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp7jx\" (UniqueName: \"kubernetes.io/projected/f537a705-b98d-4cc1-8fba-f9fb4145fc33-kube-api-access-xp7jx\") pod \"collect-profiles-29496060-gq9mb\" (UID: \"f537a705-b98d-4cc1-8fba-f9fb4145fc33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-gq9mb" Jan 30 09:00:00 crc kubenswrapper[4870]: I0130 09:00:00.482226 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-gq9mb" Jan 30 09:00:00 crc kubenswrapper[4870]: I0130 09:00:00.924424 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496060-gq9mb"] Jan 30 09:00:00 crc kubenswrapper[4870]: I0130 09:00:00.954553 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-gq9mb" event={"ID":"f537a705-b98d-4cc1-8fba-f9fb4145fc33","Type":"ContainerStarted","Data":"8d7b9cb36aff363a87a4e3e13e6a4a3eb2da89546a6ff8ab278af0d598dd103b"} Jan 30 09:00:01 crc kubenswrapper[4870]: I0130 09:00:01.965579 4870 generic.go:334] "Generic (PLEG): container finished" podID="f537a705-b98d-4cc1-8fba-f9fb4145fc33" containerID="21570fbd391aa6805bfee83f36df9ca917daf03782908d47cd7dd4eedf90e176" exitCode=0 Jan 30 09:00:01 crc kubenswrapper[4870]: I0130 09:00:01.965744 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-gq9mb" event={"ID":"f537a705-b98d-4cc1-8fba-f9fb4145fc33","Type":"ContainerDied","Data":"21570fbd391aa6805bfee83f36df9ca917daf03782908d47cd7dd4eedf90e176"} Jan 30 09:00:03 crc kubenswrapper[4870]: I0130 09:00:03.367269 4870 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 09:00:03 crc kubenswrapper[4870]: I0130 09:00:03.390545 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f537a705-b98d-4cc1-8fba-f9fb4145fc33-secret-volume\") pod \"f537a705-b98d-4cc1-8fba-f9fb4145fc33\" (UID: \"f537a705-b98d-4cc1-8fba-f9fb4145fc33\") "
Jan 30 09:00:03 crc kubenswrapper[4870]: I0130 09:00:03.390584 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f537a705-b98d-4cc1-8fba-f9fb4145fc33-config-volume\") pod \"f537a705-b98d-4cc1-8fba-f9fb4145fc33\" (UID: \"f537a705-b98d-4cc1-8fba-f9fb4145fc33\") "
Jan 30 09:00:03 crc kubenswrapper[4870]: I0130 09:00:03.390698 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp7jx\" (UniqueName: \"kubernetes.io/projected/f537a705-b98d-4cc1-8fba-f9fb4145fc33-kube-api-access-xp7jx\") pod \"f537a705-b98d-4cc1-8fba-f9fb4145fc33\" (UID: \"f537a705-b98d-4cc1-8fba-f9fb4145fc33\") "
Jan 30 09:00:03 crc kubenswrapper[4870]: I0130 09:00:03.391456 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f537a705-b98d-4cc1-8fba-f9fb4145fc33-config-volume" (OuterVolumeSpecName: "config-volume") pod "f537a705-b98d-4cc1-8fba-f9fb4145fc33" (UID: "f537a705-b98d-4cc1-8fba-f9fb4145fc33"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 09:00:03 crc kubenswrapper[4870]: I0130 09:00:03.395996 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f537a705-b98d-4cc1-8fba-f9fb4145fc33-kube-api-access-xp7jx" (OuterVolumeSpecName: "kube-api-access-xp7jx") pod "f537a705-b98d-4cc1-8fba-f9fb4145fc33" (UID: "f537a705-b98d-4cc1-8fba-f9fb4145fc33"). InnerVolumeSpecName "kube-api-access-xp7jx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 09:00:03 crc kubenswrapper[4870]: I0130 09:00:03.404020 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f537a705-b98d-4cc1-8fba-f9fb4145fc33-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f537a705-b98d-4cc1-8fba-f9fb4145fc33" (UID: "f537a705-b98d-4cc1-8fba-f9fb4145fc33"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:00:03 crc kubenswrapper[4870]: I0130 09:00:03.492995 4870 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f537a705-b98d-4cc1-8fba-f9fb4145fc33-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:00:03 crc kubenswrapper[4870]: I0130 09:00:03.493031 4870 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f537a705-b98d-4cc1-8fba-f9fb4145fc33-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:00:03 crc kubenswrapper[4870]: I0130 09:00:03.493042 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp7jx\" (UniqueName: \"kubernetes.io/projected/f537a705-b98d-4cc1-8fba-f9fb4145fc33-kube-api-access-xp7jx\") on node \"crc\" DevicePath \"\"" Jan 30 09:00:03 crc kubenswrapper[4870]: I0130 09:00:03.989292 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-gq9mb" event={"ID":"f537a705-b98d-4cc1-8fba-f9fb4145fc33","Type":"ContainerDied","Data":"8d7b9cb36aff363a87a4e3e13e6a4a3eb2da89546a6ff8ab278af0d598dd103b"} Jan 30 09:00:03 crc kubenswrapper[4870]: I0130 09:00:03.989673 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d7b9cb36aff363a87a4e3e13e6a4a3eb2da89546a6ff8ab278af0d598dd103b" Jan 30 09:00:03 crc kubenswrapper[4870]: I0130 09:00:03.989417 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-gq9mb" Jan 30 09:00:04 crc kubenswrapper[4870]: I0130 09:00:04.457011 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496015-h7vrc"] Jan 30 09:00:04 crc kubenswrapper[4870]: I0130 09:00:04.473073 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496015-h7vrc"] Jan 30 09:00:06 crc kubenswrapper[4870]: I0130 09:00:06.096313 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84ecc6b5-f0f3-40b1-ba86-24eabdbdc409" path="/var/lib/kubelet/pods/84ecc6b5-f0f3-40b1-ba86-24eabdbdc409/volumes" Jan 30 09:00:08 crc kubenswrapper[4870]: I0130 09:00:08.074742 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283" Jan 30 09:00:08 crc kubenswrapper[4870]: E0130 09:00:08.075725 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:00:20 crc kubenswrapper[4870]: I0130 09:00:20.075509 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283" Jan 30 09:00:20 crc kubenswrapper[4870]: E0130 09:00:20.077679 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:00:33 crc kubenswrapper[4870]: I0130 09:00:33.074695 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283" Jan 30 09:00:33 crc kubenswrapper[4870]: E0130 09:00:33.075520 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:00:38 crc kubenswrapper[4870]: I0130 09:00:38.299242 4870 scope.go:117] "RemoveContainer" containerID="906fa4603bfe71976f941c25c726c6a5f3b1b9c0bede621580c2910f359fd6f2" Jan 30 09:00:44 crc kubenswrapper[4870]: I0130 09:00:44.075326 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283" Jan 30 09:00:44 crc kubenswrapper[4870]: E0130 09:00:44.076277 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:00:58 crc kubenswrapper[4870]: I0130 09:00:58.075159 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283" Jan 30 09:00:58 crc kubenswrapper[4870]: E0130 09:00:58.076418 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:01:00 crc kubenswrapper[4870]: I0130 09:01:00.167872 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29496061-tjh7b"] Jan 30 09:01:00 crc kubenswrapper[4870]: E0130 09:01:00.168639 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f537a705-b98d-4cc1-8fba-f9fb4145fc33" containerName="collect-profiles" Jan 30 09:01:00 crc kubenswrapper[4870]: I0130 09:01:00.168652 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f537a705-b98d-4cc1-8fba-f9fb4145fc33" containerName="collect-profiles" Jan 30 09:01:00 crc kubenswrapper[4870]: I0130 09:01:00.168861 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f537a705-b98d-4cc1-8fba-f9fb4145fc33" containerName="collect-profiles" Jan 30 09:01:00 crc kubenswrapper[4870]: I0130 09:01:00.170037 4870 util.go:30] "No sandbox for pod can be found. 
Jan 30 09:01:00 crc kubenswrapper[4870]: I0130 09:01:00.189058 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496061-tjh7b"]
Jan 30 09:01:00 crc kubenswrapper[4870]: I0130 09:01:00.318361 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s2p4\" (UniqueName: \"kubernetes.io/projected/43a9af69-f9ef-444e-8505-ccf1eac1a036-kube-api-access-5s2p4\") pod \"keystone-cron-29496061-tjh7b\" (UID: \"43a9af69-f9ef-444e-8505-ccf1eac1a036\") " pod="openstack/keystone-cron-29496061-tjh7b"
Jan 30 09:01:00 crc kubenswrapper[4870]: I0130 09:01:00.318419 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/43a9af69-f9ef-444e-8505-ccf1eac1a036-fernet-keys\") pod \"keystone-cron-29496061-tjh7b\" (UID: \"43a9af69-f9ef-444e-8505-ccf1eac1a036\") " pod="openstack/keystone-cron-29496061-tjh7b"
Jan 30 09:01:00 crc kubenswrapper[4870]: I0130 09:01:00.318488 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a9af69-f9ef-444e-8505-ccf1eac1a036-combined-ca-bundle\") pod \"keystone-cron-29496061-tjh7b\" (UID: \"43a9af69-f9ef-444e-8505-ccf1eac1a036\") " pod="openstack/keystone-cron-29496061-tjh7b"
Jan 30 09:01:00 crc kubenswrapper[4870]: I0130 09:01:00.318515 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43a9af69-f9ef-444e-8505-ccf1eac1a036-config-data\") pod \"keystone-cron-29496061-tjh7b\" (UID: \"43a9af69-f9ef-444e-8505-ccf1eac1a036\") " pod="openstack/keystone-cron-29496061-tjh7b"
Jan 30 09:01:00 crc kubenswrapper[4870]: I0130 09:01:00.420978 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s2p4\" (UniqueName: \"kubernetes.io/projected/43a9af69-f9ef-444e-8505-ccf1eac1a036-kube-api-access-5s2p4\") pod \"keystone-cron-29496061-tjh7b\" (UID: \"43a9af69-f9ef-444e-8505-ccf1eac1a036\") " pod="openstack/keystone-cron-29496061-tjh7b"
Jan 30 09:01:00 crc kubenswrapper[4870]: I0130 09:01:00.421046 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/43a9af69-f9ef-444e-8505-ccf1eac1a036-fernet-keys\") pod \"keystone-cron-29496061-tjh7b\" (UID: \"43a9af69-f9ef-444e-8505-ccf1eac1a036\") " pod="openstack/keystone-cron-29496061-tjh7b"
Jan 30 09:01:00 crc kubenswrapper[4870]: I0130 09:01:00.421106 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a9af69-f9ef-444e-8505-ccf1eac1a036-combined-ca-bundle\") pod \"keystone-cron-29496061-tjh7b\" (UID: \"43a9af69-f9ef-444e-8505-ccf1eac1a036\") " pod="openstack/keystone-cron-29496061-tjh7b"
Jan 30 09:01:00 crc kubenswrapper[4870]: I0130 09:01:00.421127 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43a9af69-f9ef-444e-8505-ccf1eac1a036-config-data\") pod \"keystone-cron-29496061-tjh7b\" (UID: \"43a9af69-f9ef-444e-8505-ccf1eac1a036\") " pod="openstack/keystone-cron-29496061-tjh7b"
Jan 30 09:01:00 crc kubenswrapper[4870]: I0130 09:01:00.429657 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/43a9af69-f9ef-444e-8505-ccf1eac1a036-fernet-keys\") pod \"keystone-cron-29496061-tjh7b\" (UID: \"43a9af69-f9ef-444e-8505-ccf1eac1a036\") " pod="openstack/keystone-cron-29496061-tjh7b"
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/43a9af69-f9ef-444e-8505-ccf1eac1a036-fernet-keys\") pod \"keystone-cron-29496061-tjh7b\" (UID: \"43a9af69-f9ef-444e-8505-ccf1eac1a036\") " pod="openstack/keystone-cron-29496061-tjh7b" Jan 30 09:01:00 crc kubenswrapper[4870]: I0130 09:01:00.434869 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a9af69-f9ef-444e-8505-ccf1eac1a036-combined-ca-bundle\") pod \"keystone-cron-29496061-tjh7b\" (UID: \"43a9af69-f9ef-444e-8505-ccf1eac1a036\") " pod="openstack/keystone-cron-29496061-tjh7b" Jan 30 09:01:00 crc kubenswrapper[4870]: I0130 09:01:00.436589 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43a9af69-f9ef-444e-8505-ccf1eac1a036-config-data\") pod \"keystone-cron-29496061-tjh7b\" (UID: \"43a9af69-f9ef-444e-8505-ccf1eac1a036\") " pod="openstack/keystone-cron-29496061-tjh7b" Jan 30 09:01:00 crc kubenswrapper[4870]: I0130 09:01:00.445000 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s2p4\" (UniqueName: \"kubernetes.io/projected/43a9af69-f9ef-444e-8505-ccf1eac1a036-kube-api-access-5s2p4\") pod \"keystone-cron-29496061-tjh7b\" (UID: \"43a9af69-f9ef-444e-8505-ccf1eac1a036\") " pod="openstack/keystone-cron-29496061-tjh7b" Jan 30 09:01:00 crc kubenswrapper[4870]: I0130 09:01:00.501439 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496061-tjh7b" Jan 30 09:01:01 crc kubenswrapper[4870]: I0130 09:01:01.017250 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496061-tjh7b"] Jan 30 09:01:01 crc kubenswrapper[4870]: I0130 09:01:01.492977 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496061-tjh7b" event={"ID":"43a9af69-f9ef-444e-8505-ccf1eac1a036","Type":"ContainerStarted","Data":"015be41b58e089227cb06e61cbafc7f719a04446c8960392bdc84dfdeaa2514b"} Jan 30 09:01:02 crc kubenswrapper[4870]: I0130 09:01:02.502239 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496061-tjh7b" event={"ID":"43a9af69-f9ef-444e-8505-ccf1eac1a036","Type":"ContainerStarted","Data":"b6529962890f5d75e098916eb17988c45823be9141cadf7c34ba6541efc047f6"} Jan 30 09:01:02 crc kubenswrapper[4870]: I0130 09:01:02.529036 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29496061-tjh7b" podStartSLOduration=2.529010326 podStartE2EDuration="2.529010326s" podCreationTimestamp="2026-01-30 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 09:01:02.51599526 +0000 UTC m=+3101.211542369" watchObservedRunningTime="2026-01-30 09:01:02.529010326 +0000 UTC m=+3101.224557435" Jan 30 09:01:06 crc kubenswrapper[4870]: I0130 09:01:06.539382 4870 generic.go:334] "Generic (PLEG): container finished" podID="43a9af69-f9ef-444e-8505-ccf1eac1a036" containerID="b6529962890f5d75e098916eb17988c45823be9141cadf7c34ba6541efc047f6" exitCode=0 Jan 30 09:01:06 crc kubenswrapper[4870]: I0130 09:01:06.539481 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496061-tjh7b" event={"ID":"43a9af69-f9ef-444e-8505-ccf1eac1a036","Type":"ContainerDied","Data":"b6529962890f5d75e098916eb17988c45823be9141cadf7c34ba6541efc047f6"} Jan 30 09:01:08 crc kubenswrapper[4870]: 
Jan 30 09:01:08 crc kubenswrapper[4870]: I0130 09:01:08.110991 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a9af69-f9ef-444e-8505-ccf1eac1a036-combined-ca-bundle\") pod \"43a9af69-f9ef-444e-8505-ccf1eac1a036\" (UID: \"43a9af69-f9ef-444e-8505-ccf1eac1a036\") "
Jan 30 09:01:08 crc kubenswrapper[4870]: I0130 09:01:08.111191 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43a9af69-f9ef-444e-8505-ccf1eac1a036-config-data\") pod \"43a9af69-f9ef-444e-8505-ccf1eac1a036\" (UID: \"43a9af69-f9ef-444e-8505-ccf1eac1a036\") "
Jan 30 09:01:08 crc kubenswrapper[4870]: I0130 09:01:08.111324 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s2p4\" (UniqueName: \"kubernetes.io/projected/43a9af69-f9ef-444e-8505-ccf1eac1a036-kube-api-access-5s2p4\") pod \"43a9af69-f9ef-444e-8505-ccf1eac1a036\" (UID: \"43a9af69-f9ef-444e-8505-ccf1eac1a036\") "
Jan 30 09:01:08 crc kubenswrapper[4870]: I0130 09:01:08.111382 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/43a9af69-f9ef-444e-8505-ccf1eac1a036-fernet-keys\") pod \"43a9af69-f9ef-444e-8505-ccf1eac1a036\" (UID: \"43a9af69-f9ef-444e-8505-ccf1eac1a036\") "
Jan 30 09:01:08 crc kubenswrapper[4870]: I0130 09:01:08.118398 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43a9af69-f9ef-444e-8505-ccf1eac1a036-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "43a9af69-f9ef-444e-8505-ccf1eac1a036" (UID: "43a9af69-f9ef-444e-8505-ccf1eac1a036"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 09:01:08 crc kubenswrapper[4870]: I0130 09:01:08.119317 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43a9af69-f9ef-444e-8505-ccf1eac1a036-kube-api-access-5s2p4" (OuterVolumeSpecName: "kube-api-access-5s2p4") pod "43a9af69-f9ef-444e-8505-ccf1eac1a036" (UID: "43a9af69-f9ef-444e-8505-ccf1eac1a036"). InnerVolumeSpecName "kube-api-access-5s2p4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 09:01:08 crc kubenswrapper[4870]: I0130 09:01:08.145795 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43a9af69-f9ef-444e-8505-ccf1eac1a036-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43a9af69-f9ef-444e-8505-ccf1eac1a036" (UID: "43a9af69-f9ef-444e-8505-ccf1eac1a036"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 09:01:08 crc kubenswrapper[4870]: I0130 09:01:08.178923 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43a9af69-f9ef-444e-8505-ccf1eac1a036-config-data" (OuterVolumeSpecName: "config-data") pod "43a9af69-f9ef-444e-8505-ccf1eac1a036" (UID: "43a9af69-f9ef-444e-8505-ccf1eac1a036"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:01:08 crc kubenswrapper[4870]: I0130 09:01:08.213685 4870 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/43a9af69-f9ef-444e-8505-ccf1eac1a036-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:08 crc kubenswrapper[4870]: I0130 09:01:08.213724 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a9af69-f9ef-444e-8505-ccf1eac1a036-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:08 crc kubenswrapper[4870]: I0130 09:01:08.213737 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43a9af69-f9ef-444e-8505-ccf1eac1a036-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:08 crc kubenswrapper[4870]: I0130 09:01:08.213748 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s2p4\" (UniqueName: \"kubernetes.io/projected/43a9af69-f9ef-444e-8505-ccf1eac1a036-kube-api-access-5s2p4\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:08 crc kubenswrapper[4870]: I0130 09:01:08.560535 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496061-tjh7b" event={"ID":"43a9af69-f9ef-444e-8505-ccf1eac1a036","Type":"ContainerDied","Data":"015be41b58e089227cb06e61cbafc7f719a04446c8960392bdc84dfdeaa2514b"} Jan 30 09:01:08 crc kubenswrapper[4870]: I0130 09:01:08.560592 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496061-tjh7b" Jan 30 09:01:08 crc kubenswrapper[4870]: I0130 09:01:08.560600 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="015be41b58e089227cb06e61cbafc7f719a04446c8960392bdc84dfdeaa2514b" Jan 30 09:01:09 crc kubenswrapper[4870]: I0130 09:01:09.076369 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283" Jan 30 09:01:09 crc kubenswrapper[4870]: E0130 09:01:09.076688 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:01:15 crc kubenswrapper[4870]: I0130 09:01:15.842929 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9wkl4"] Jan 30 09:01:15 crc kubenswrapper[4870]: E0130 09:01:15.843868 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a9af69-f9ef-444e-8505-ccf1eac1a036" containerName="keystone-cron" Jan 30 09:01:15 crc kubenswrapper[4870]: I0130 09:01:15.843954 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a9af69-f9ef-444e-8505-ccf1eac1a036" containerName="keystone-cron" Jan 30 09:01:15 crc kubenswrapper[4870]: I0130 09:01:15.844297 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a9af69-f9ef-444e-8505-ccf1eac1a036" containerName="keystone-cron" Jan 30 09:01:15 crc kubenswrapper[4870]: I0130 09:01:15.846141 4870 util.go:30] "No sandbox for pod can be found. 
Jan 30 09:01:15 crc kubenswrapper[4870]: I0130 09:01:15.856743 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9wkl4"]
Jan 30 09:01:15 crc kubenswrapper[4870]: I0130 09:01:15.994739 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3785ae9-ea5a-4e63-99b5-e2f370f32739-catalog-content\") pod \"redhat-marketplace-9wkl4\" (UID: \"a3785ae9-ea5a-4e63-99b5-e2f370f32739\") " pod="openshift-marketplace/redhat-marketplace-9wkl4"
Jan 30 09:01:15 crc kubenswrapper[4870]: I0130 09:01:15.994825 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3785ae9-ea5a-4e63-99b5-e2f370f32739-utilities\") pod \"redhat-marketplace-9wkl4\" (UID: \"a3785ae9-ea5a-4e63-99b5-e2f370f32739\") " pod="openshift-marketplace/redhat-marketplace-9wkl4"
Jan 30 09:01:15 crc kubenswrapper[4870]: I0130 09:01:15.995148 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kqjf\" (UniqueName: \"kubernetes.io/projected/a3785ae9-ea5a-4e63-99b5-e2f370f32739-kube-api-access-5kqjf\") pod \"redhat-marketplace-9wkl4\" (UID: \"a3785ae9-ea5a-4e63-99b5-e2f370f32739\") " pod="openshift-marketplace/redhat-marketplace-9wkl4"
Jan 30 09:01:16 crc kubenswrapper[4870]: I0130 09:01:16.096870 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3785ae9-ea5a-4e63-99b5-e2f370f32739-catalog-content\") pod \"redhat-marketplace-9wkl4\" (UID: \"a3785ae9-ea5a-4e63-99b5-e2f370f32739\") " pod="openshift-marketplace/redhat-marketplace-9wkl4"
Jan 30 09:01:16 crc kubenswrapper[4870]: I0130 09:01:16.096939 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3785ae9-ea5a-4e63-99b5-e2f370f32739-utilities\") pod \"redhat-marketplace-9wkl4\" (UID: \"a3785ae9-ea5a-4e63-99b5-e2f370f32739\") " pod="openshift-marketplace/redhat-marketplace-9wkl4"
Jan 30 09:01:16 crc kubenswrapper[4870]: I0130 09:01:16.097051 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kqjf\" (UniqueName: \"kubernetes.io/projected/a3785ae9-ea5a-4e63-99b5-e2f370f32739-kube-api-access-5kqjf\") pod \"redhat-marketplace-9wkl4\" (UID: \"a3785ae9-ea5a-4e63-99b5-e2f370f32739\") " pod="openshift-marketplace/redhat-marketplace-9wkl4"
Jan 30 09:01:16 crc kubenswrapper[4870]: I0130 09:01:16.097358 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3785ae9-ea5a-4e63-99b5-e2f370f32739-catalog-content\") pod \"redhat-marketplace-9wkl4\" (UID: \"a3785ae9-ea5a-4e63-99b5-e2f370f32739\") " pod="openshift-marketplace/redhat-marketplace-9wkl4"
Jan 30 09:01:16 crc kubenswrapper[4870]: I0130 09:01:16.097432 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3785ae9-ea5a-4e63-99b5-e2f370f32739-utilities\") pod \"redhat-marketplace-9wkl4\" (UID: \"a3785ae9-ea5a-4e63-99b5-e2f370f32739\") " pod="openshift-marketplace/redhat-marketplace-9wkl4"
Jan 30 09:01:16 crc kubenswrapper[4870]: I0130 09:01:16.129024 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kqjf\" (UniqueName: \"kubernetes.io/projected/a3785ae9-ea5a-4e63-99b5-e2f370f32739-kube-api-access-5kqjf\") pod \"redhat-marketplace-9wkl4\" (UID: \"a3785ae9-ea5a-4e63-99b5-e2f370f32739\") " pod="openshift-marketplace/redhat-marketplace-9wkl4"
Jan 30 09:01:16 crc kubenswrapper[4870]: I0130 09:01:16.186992 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9wkl4"
Jan 30 09:01:16 crc kubenswrapper[4870]: I0130 09:01:16.703911 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9wkl4"]
Jan 30 09:01:17 crc kubenswrapper[4870]: I0130 09:01:17.650484 4870 generic.go:334] "Generic (PLEG): container finished" podID="a3785ae9-ea5a-4e63-99b5-e2f370f32739" containerID="2c5fa9920d90bab0f1d5ce6bf20404df2d8adf83c6aa4e90b8ddd69a895a4a74" exitCode=0
Jan 30 09:01:17 crc kubenswrapper[4870]: I0130 09:01:17.650581 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wkl4" event={"ID":"a3785ae9-ea5a-4e63-99b5-e2f370f32739","Type":"ContainerDied","Data":"2c5fa9920d90bab0f1d5ce6bf20404df2d8adf83c6aa4e90b8ddd69a895a4a74"}
Jan 30 09:01:17 crc kubenswrapper[4870]: I0130 09:01:17.650844 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wkl4" event={"ID":"a3785ae9-ea5a-4e63-99b5-e2f370f32739","Type":"ContainerStarted","Data":"4f6eed6a15d8474e632789a235f8b8fe26a34b14f77f04f0be67129c66a15005"}
Jan 30 09:01:20 crc kubenswrapper[4870]: I0130 09:01:20.681934 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wkl4" event={"ID":"a3785ae9-ea5a-4e63-99b5-e2f370f32739","Type":"ContainerStarted","Data":"7e3339d6ec35a3b27f4157c7f3cf254ce64e449a9490f4aed5dffbc96defaeae"}
Jan 30 09:01:21 crc kubenswrapper[4870]: I0130 09:01:21.074852 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283"
Jan 30 09:01:21 crc kubenswrapper[4870]: E0130 09:01:21.075196 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 09:01:23 crc kubenswrapper[4870]: I0130 09:01:23.712418 4870 generic.go:334] "Generic (PLEG): container finished" podID="a3785ae9-ea5a-4e63-99b5-e2f370f32739" containerID="7e3339d6ec35a3b27f4157c7f3cf254ce64e449a9490f4aed5dffbc96defaeae" exitCode=0
Jan 30 09:01:23 crc kubenswrapper[4870]: I0130 09:01:23.712499 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wkl4" event={"ID":"a3785ae9-ea5a-4e63-99b5-e2f370f32739","Type":"ContainerDied","Data":"7e3339d6ec35a3b27f4157c7f3cf254ce64e449a9490f4aed5dffbc96defaeae"}
Jan 30 09:01:25 crc kubenswrapper[4870]: I0130 09:01:25.738616 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wkl4" event={"ID":"a3785ae9-ea5a-4e63-99b5-e2f370f32739","Type":"ContainerStarted","Data":"15526d903032abe561dc01c511ea0d161fcbbc3cf7983a924b78c8907d98e7d1"}
Jan 30 09:01:25 crc kubenswrapper[4870]: I0130 09:01:25.770864 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9wkl4" podStartSLOduration=3.766599748 podStartE2EDuration="10.77084499s" podCreationTimestamp="2026-01-30 09:01:15 +0000 UTC" firstStartedPulling="2026-01-30 09:01:17.652059515 +0000 UTC m=+3116.347606624" lastFinishedPulling="2026-01-30 09:01:24.656304757 +0000 UTC m=+3123.351851866" observedRunningTime="2026-01-30 09:01:25.759517986 +0000 UTC m=+3124.455065105" watchObservedRunningTime="2026-01-30 09:01:25.77084499 +0000 UTC m=+3124.466392099"
Jan 30 09:01:26 crc kubenswrapper[4870]: I0130 09:01:26.188075 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9wkl4"
Jan 30 09:01:26 crc kubenswrapper[4870]: I0130 09:01:26.188204 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9wkl4"
Jan 30 09:01:27 crc kubenswrapper[4870]: I0130 09:01:27.232800 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-9wkl4" podUID="a3785ae9-ea5a-4e63-99b5-e2f370f32739" containerName="registry-server" probeResult="failure" output=<
Jan 30 09:01:27 crc kubenswrapper[4870]: timeout: failed to connect service ":50051" within 1s
Jan 30 09:01:27 crc kubenswrapper[4870]: >
Jan 30 09:01:34 crc kubenswrapper[4870]: I0130 09:01:34.075214 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283"
Jan 30 09:01:34 crc kubenswrapper[4870]: E0130 09:01:34.076169 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 09:01:36 crc kubenswrapper[4870]: I0130 09:01:36.237328 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9wkl4"
Jan 30 09:01:36 crc kubenswrapper[4870]: I0130 09:01:36.286535 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9wkl4"
Jan 30 09:01:36 crc kubenswrapper[4870]: I0130 09:01:36.473647 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9wkl4"]
Jan 30 09:01:37 crc kubenswrapper[4870]: I0130 09:01:37.851989 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9wkl4" podUID="a3785ae9-ea5a-4e63-99b5-e2f370f32739" containerName="registry-server" containerID="cri-o://15526d903032abe561dc01c511ea0d161fcbbc3cf7983a924b78c8907d98e7d1" gracePeriod=2
Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.331563 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9wkl4"
Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.475353 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kqjf\" (UniqueName: \"kubernetes.io/projected/a3785ae9-ea5a-4e63-99b5-e2f370f32739-kube-api-access-5kqjf\") pod \"a3785ae9-ea5a-4e63-99b5-e2f370f32739\" (UID: \"a3785ae9-ea5a-4e63-99b5-e2f370f32739\") "
Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.475628 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3785ae9-ea5a-4e63-99b5-e2f370f32739-utilities\") pod \"a3785ae9-ea5a-4e63-99b5-e2f370f32739\" (UID: \"a3785ae9-ea5a-4e63-99b5-e2f370f32739\") "
Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.475714 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3785ae9-ea5a-4e63-99b5-e2f370f32739-catalog-content\") pod \"a3785ae9-ea5a-4e63-99b5-e2f370f32739\" (UID: \"a3785ae9-ea5a-4e63-99b5-e2f370f32739\") "
Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.478929 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3785ae9-ea5a-4e63-99b5-e2f370f32739-utilities" (OuterVolumeSpecName: "utilities") pod "a3785ae9-ea5a-4e63-99b5-e2f370f32739" (UID: "a3785ae9-ea5a-4e63-99b5-e2f370f32739"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.485131 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3785ae9-ea5a-4e63-99b5-e2f370f32739-kube-api-access-5kqjf" (OuterVolumeSpecName: "kube-api-access-5kqjf") pod "a3785ae9-ea5a-4e63-99b5-e2f370f32739" (UID: "a3785ae9-ea5a-4e63-99b5-e2f370f32739"). InnerVolumeSpecName "kube-api-access-5kqjf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.501140 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3785ae9-ea5a-4e63-99b5-e2f370f32739-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3785ae9-ea5a-4e63-99b5-e2f370f32739" (UID: "a3785ae9-ea5a-4e63-99b5-e2f370f32739"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.578056 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3785ae9-ea5a-4e63-99b5-e2f370f32739-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.578093 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kqjf\" (UniqueName: \"kubernetes.io/projected/a3785ae9-ea5a-4e63-99b5-e2f370f32739-kube-api-access-5kqjf\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.578105 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3785ae9-ea5a-4e63-99b5-e2f370f32739-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.862272 4870 generic.go:334] "Generic (PLEG): container finished" podID="a3785ae9-ea5a-4e63-99b5-e2f370f32739" containerID="15526d903032abe561dc01c511ea0d161fcbbc3cf7983a924b78c8907d98e7d1" exitCode=0 Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.862351 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wkl4" event={"ID":"a3785ae9-ea5a-4e63-99b5-e2f370f32739","Type":"ContainerDied","Data":"15526d903032abe561dc01c511ea0d161fcbbc3cf7983a924b78c8907d98e7d1"} Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.862361 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9wkl4" Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.862688 4870 scope.go:117] "RemoveContainer" containerID="15526d903032abe561dc01c511ea0d161fcbbc3cf7983a924b78c8907d98e7d1" Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.862667 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wkl4" event={"ID":"a3785ae9-ea5a-4e63-99b5-e2f370f32739","Type":"ContainerDied","Data":"4f6eed6a15d8474e632789a235f8b8fe26a34b14f77f04f0be67129c66a15005"} Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.892571 4870 scope.go:117] "RemoveContainer" containerID="7e3339d6ec35a3b27f4157c7f3cf254ce64e449a9490f4aed5dffbc96defaeae" Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.909607 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9wkl4"] Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.918446 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9wkl4"] Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.922074 4870 scope.go:117] "RemoveContainer" containerID="2c5fa9920d90bab0f1d5ce6bf20404df2d8adf83c6aa4e90b8ddd69a895a4a74" Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.995234 4870 scope.go:117] "RemoveContainer" containerID="15526d903032abe561dc01c511ea0d161fcbbc3cf7983a924b78c8907d98e7d1" Jan 30 09:01:38 crc kubenswrapper[4870]: E0130 09:01:38.995704 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15526d903032abe561dc01c511ea0d161fcbbc3cf7983a924b78c8907d98e7d1\": container with ID starting with 15526d903032abe561dc01c511ea0d161fcbbc3cf7983a924b78c8907d98e7d1 not found: ID does not exist" containerID="15526d903032abe561dc01c511ea0d161fcbbc3cf7983a924b78c8907d98e7d1" Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.995754 4870 
Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.995784 4870 scope.go:117] "RemoveContainer" containerID="7e3339d6ec35a3b27f4157c7f3cf254ce64e449a9490f4aed5dffbc96defaeae"
Jan 30 09:01:38 crc kubenswrapper[4870]: E0130 09:01:38.996211 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e3339d6ec35a3b27f4157c7f3cf254ce64e449a9490f4aed5dffbc96defaeae\": container with ID starting with 7e3339d6ec35a3b27f4157c7f3cf254ce64e449a9490f4aed5dffbc96defaeae not found: ID does not exist" containerID="7e3339d6ec35a3b27f4157c7f3cf254ce64e449a9490f4aed5dffbc96defaeae"
Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.996302 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3339d6ec35a3b27f4157c7f3cf254ce64e449a9490f4aed5dffbc96defaeae"} err="failed to get container status \"7e3339d6ec35a3b27f4157c7f3cf254ce64e449a9490f4aed5dffbc96defaeae\": rpc error: code = NotFound desc = could not find container \"7e3339d6ec35a3b27f4157c7f3cf254ce64e449a9490f4aed5dffbc96defaeae\": container with ID starting with 7e3339d6ec35a3b27f4157c7f3cf254ce64e449a9490f4aed5dffbc96defaeae not found: ID does not exist"
Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.996366 4870 scope.go:117] "RemoveContainer" containerID="2c5fa9920d90bab0f1d5ce6bf20404df2d8adf83c6aa4e90b8ddd69a895a4a74"
Jan 30 09:01:38 crc kubenswrapper[4870]: E0130 09:01:38.996648 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c5fa9920d90bab0f1d5ce6bf20404df2d8adf83c6aa4e90b8ddd69a895a4a74\": container with ID starting with 2c5fa9920d90bab0f1d5ce6bf20404df2d8adf83c6aa4e90b8ddd69a895a4a74 not found: ID does not exist" containerID="2c5fa9920d90bab0f1d5ce6bf20404df2d8adf83c6aa4e90b8ddd69a895a4a74"
Jan 30 09:01:38 crc kubenswrapper[4870]: I0130 09:01:38.996719 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c5fa9920d90bab0f1d5ce6bf20404df2d8adf83c6aa4e90b8ddd69a895a4a74"} err="failed to get container status \"2c5fa9920d90bab0f1d5ce6bf20404df2d8adf83c6aa4e90b8ddd69a895a4a74\": rpc error: code = NotFound desc = could not find container \"2c5fa9920d90bab0f1d5ce6bf20404df2d8adf83c6aa4e90b8ddd69a895a4a74\": container with ID starting with 2c5fa9920d90bab0f1d5ce6bf20404df2d8adf83c6aa4e90b8ddd69a895a4a74 not found: ID does not exist"
Jan 30 09:01:40 crc kubenswrapper[4870]: I0130 09:01:40.089696 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3785ae9-ea5a-4e63-99b5-e2f370f32739" path="/var/lib/kubelet/pods/a3785ae9-ea5a-4e63-99b5-e2f370f32739/volumes"
Jan 30 09:01:45 crc kubenswrapper[4870]: I0130 09:01:45.076064 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283"
Jan 30 09:01:45 crc kubenswrapper[4870]: E0130 09:01:45.077006 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:01:58 crc kubenswrapper[4870]: I0130 09:01:58.075552 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283" Jan 30 09:01:58 crc kubenswrapper[4870]: E0130 09:01:58.076331 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:02:11 crc kubenswrapper[4870]: I0130 09:02:11.075179 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283" Jan 30 09:02:11 crc kubenswrapper[4870]: E0130 09:02:11.076064 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:02:23 crc kubenswrapper[4870]: I0130 09:02:23.075111 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283" Jan 30 09:02:23 crc kubenswrapper[4870]: E0130 09:02:23.075918 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:02:34 crc kubenswrapper[4870]: I0130 09:02:34.075236 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283" Jan 30 09:02:34 crc kubenswrapper[4870]: E0130 09:02:34.077070 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:02:49 crc kubenswrapper[4870]: I0130 09:02:49.075312 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283" Jan 30 09:02:49 crc kubenswrapper[4870]: E0130 09:02:49.076665 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 30 09:03:01 crc kubenswrapper[4870]: I0130 09:03:01.074900 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283"
Jan 30 09:03:01 crc kubenswrapper[4870]: E0130 09:03:01.075587 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 09:03:16 crc kubenswrapper[4870]: I0130 09:03:16.074772 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283"
Jan 30 09:03:16 crc kubenswrapper[4870]: E0130 09:03:16.075701 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 09:03:28 crc kubenswrapper[4870]: I0130 09:03:28.075587 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283"
Jan 30 09:03:28 crc kubenswrapper[4870]: I0130 09:03:28.878709 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerStarted","Data":"17820b765eafb63916466378e99044acfb43003b6255354ee74c2b5d6f218271"}
Jan 30 09:05:55 crc kubenswrapper[4870]: I0130 09:05:55.250109 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 09:05:55 crc kubenswrapper[4870]: I0130 09:05:55.250570 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 09:06:18 crc kubenswrapper[4870]: I0130 09:06:18.013758 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-snfnh"]
Jan 30 09:06:18 crc kubenswrapper[4870]: E0130 09:06:18.015368 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3785ae9-ea5a-4e63-99b5-e2f370f32739" containerName="extract-utilities"
Jan 30 09:06:18 crc kubenswrapper[4870]: I0130 09:06:18.015405 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3785ae9-ea5a-4e63-99b5-e2f370f32739" containerName="extract-utilities"
Jan 30 09:06:18 crc kubenswrapper[4870]: E0130 09:06:18.015446 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3785ae9-ea5a-4e63-99b5-e2f370f32739" containerName="extract-content"
Jan 30 09:06:18 crc kubenswrapper[4870]: I0130 09:06:18.013758 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-snfnh"] Jan 30 09:06:18 crc kubenswrapper[4870]: E0130 09:06:18.015368 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3785ae9-ea5a-4e63-99b5-e2f370f32739" containerName="extract-utilities" Jan 30 09:06:18 crc kubenswrapper[4870]: I0130 09:06:18.015405 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3785ae9-ea5a-4e63-99b5-e2f370f32739" containerName="extract-utilities" Jan 30 09:06:18 crc kubenswrapper[4870]: E0130 09:06:18.015446 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3785ae9-ea5a-4e63-99b5-e2f370f32739" containerName="extract-content" Jan 30 09:06:18 crc kubenswrapper[4870]: I0130 09:06:18.015462 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3785ae9-ea5a-4e63-99b5-e2f370f32739" containerName="extract-content" Jan 30 09:06:18 crc kubenswrapper[4870]: E0130 09:06:18.015515 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3785ae9-ea5a-4e63-99b5-e2f370f32739" containerName="registry-server" Jan 30 09:06:18 crc kubenswrapper[4870]: I0130 09:06:18.015530 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3785ae9-ea5a-4e63-99b5-e2f370f32739" containerName="registry-server" Jan 30 09:06:18 crc kubenswrapper[4870]: I0130 09:06:18.016015 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3785ae9-ea5a-4e63-99b5-e2f370f32739" containerName="registry-server" Jan 30 09:06:18 crc kubenswrapper[4870]: I0130 09:06:18.019258 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-snfnh" Jan 30 09:06:18 crc kubenswrapper[4870]: I0130 09:06:18.024469 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-snfnh"] Jan 30 09:06:18 crc kubenswrapper[4870]: I0130 09:06:18.146592 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb8d2db0-5110-4134-ab7a-df0a03ec80b4-utilities\") pod \"redhat-operators-snfnh\" (UID: \"bb8d2db0-5110-4134-ab7a-df0a03ec80b4\") " pod="openshift-marketplace/redhat-operators-snfnh" Jan 30 09:06:18 crc kubenswrapper[4870]: I0130 09:06:18.146737 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wdqq\" (UniqueName: \"kubernetes.io/projected/bb8d2db0-5110-4134-ab7a-df0a03ec80b4-kube-api-access-2wdqq\") pod \"redhat-operators-snfnh\" (UID: \"bb8d2db0-5110-4134-ab7a-df0a03ec80b4\") " pod="openshift-marketplace/redhat-operators-snfnh" Jan 30 09:06:18 crc kubenswrapper[4870]: I0130 09:06:18.147125 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb8d2db0-5110-4134-ab7a-df0a03ec80b4-catalog-content\") pod \"redhat-operators-snfnh\" (UID: \"bb8d2db0-5110-4134-ab7a-df0a03ec80b4\") " pod="openshift-marketplace/redhat-operators-snfnh" Jan 30 09:06:18 crc kubenswrapper[4870]: I0130 09:06:18.249816 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb8d2db0-5110-4134-ab7a-df0a03ec80b4-catalog-content\") pod \"redhat-operators-snfnh\" (UID: \"bb8d2db0-5110-4134-ab7a-df0a03ec80b4\") " pod="openshift-marketplace/redhat-operators-snfnh" Jan 30 09:06:18 crc kubenswrapper[4870]: I0130 09:06:18.250056 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb8d2db0-5110-4134-ab7a-df0a03ec80b4-utilities\") pod \"redhat-operators-snfnh\" (UID: \"bb8d2db0-5110-4134-ab7a-df0a03ec80b4\") " pod="openshift-marketplace/redhat-operators-snfnh" Jan 30 09:06:18 crc kubenswrapper[4870]: I0130 09:06:18.250097 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wdqq\" (UniqueName: \"kubernetes.io/projected/bb8d2db0-5110-4134-ab7a-df0a03ec80b4-kube-api-access-2wdqq\") pod \"redhat-operators-snfnh\" (UID:
\"bb8d2db0-5110-4134-ab7a-df0a03ec80b4\") " pod="openshift-marketplace/redhat-operators-snfnh" Jan 30 09:06:18 crc kubenswrapper[4870]: I0130 09:06:18.250455 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb8d2db0-5110-4134-ab7a-df0a03ec80b4-catalog-content\") pod \"redhat-operators-snfnh\" (UID: \"bb8d2db0-5110-4134-ab7a-df0a03ec80b4\") " pod="openshift-marketplace/redhat-operators-snfnh" Jan 30 09:06:18 crc kubenswrapper[4870]: I0130 09:06:18.250500 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb8d2db0-5110-4134-ab7a-df0a03ec80b4-utilities\") pod \"redhat-operators-snfnh\" (UID: \"bb8d2db0-5110-4134-ab7a-df0a03ec80b4\") " pod="openshift-marketplace/redhat-operators-snfnh" Jan 30 09:06:18 crc kubenswrapper[4870]: I0130 09:06:18.276146 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wdqq\" (UniqueName: \"kubernetes.io/projected/bb8d2db0-5110-4134-ab7a-df0a03ec80b4-kube-api-access-2wdqq\") pod \"redhat-operators-snfnh\" (UID: \"bb8d2db0-5110-4134-ab7a-df0a03ec80b4\") " pod="openshift-marketplace/redhat-operators-snfnh" Jan 30 09:06:18 crc kubenswrapper[4870]: I0130 09:06:18.365593 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-snfnh" Jan 30 09:06:18 crc kubenswrapper[4870]: I0130 09:06:18.861381 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-snfnh"] Jan 30 09:06:19 crc kubenswrapper[4870]: I0130 09:06:19.626511 4870 generic.go:334] "Generic (PLEG): container finished" podID="bb8d2db0-5110-4134-ab7a-df0a03ec80b4" containerID="e4308c32c5a5a547b265ce131e450c23aa0c352d28f57d7cd2592d8609453734" exitCode=0 Jan 30 09:06:19 crc kubenswrapper[4870]: I0130 09:06:19.626577 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-snfnh" event={"ID":"bb8d2db0-5110-4134-ab7a-df0a03ec80b4","Type":"ContainerDied","Data":"e4308c32c5a5a547b265ce131e450c23aa0c352d28f57d7cd2592d8609453734"} Jan 30 09:06:19 crc kubenswrapper[4870]: I0130 09:06:19.627150 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-snfnh" event={"ID":"bb8d2db0-5110-4134-ab7a-df0a03ec80b4","Type":"ContainerStarted","Data":"3dc7547cf515e9f204ef80e6b218eae2f4161ef84d86bd1dd66e594de10b3bf7"} Jan 30 09:06:19 crc kubenswrapper[4870]: I0130 09:06:19.630734 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:06:21 crc kubenswrapper[4870]: I0130 09:06:21.655514 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-snfnh" event={"ID":"bb8d2db0-5110-4134-ab7a-df0a03ec80b4","Type":"ContainerStarted","Data":"02481fea632748099b50b10076988a482fbae35ef176cc74caea7cfbad739622"} Jan 30 09:06:25 crc kubenswrapper[4870]: I0130 09:06:25.250220 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:06:25 crc kubenswrapper[4870]: I0130 09:06:25.250841 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" 
podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:06:26 crc kubenswrapper[4870]: I0130 09:06:26.711929 4870 generic.go:334] "Generic (PLEG): container finished" podID="bb8d2db0-5110-4134-ab7a-df0a03ec80b4" containerID="02481fea632748099b50b10076988a482fbae35ef176cc74caea7cfbad739622" exitCode=0 Jan 30 09:06:26 crc kubenswrapper[4870]: I0130 09:06:26.711978 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-snfnh" event={"ID":"bb8d2db0-5110-4134-ab7a-df0a03ec80b4","Type":"ContainerDied","Data":"02481fea632748099b50b10076988a482fbae35ef176cc74caea7cfbad739622"} Jan 30 09:06:27 crc kubenswrapper[4870]: I0130 09:06:27.722459 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-snfnh" event={"ID":"bb8d2db0-5110-4134-ab7a-df0a03ec80b4","Type":"ContainerStarted","Data":"6c012123932b91c605aa1585fbbe354ff2ad5cf49af930b09a080e233dd68a86"} Jan 30 09:06:27 crc kubenswrapper[4870]: I0130 09:06:27.747996 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-snfnh" podStartSLOduration=3.265277798 podStartE2EDuration="10.74797255s" podCreationTimestamp="2026-01-30 09:06:17 +0000 UTC" firstStartedPulling="2026-01-30 09:06:19.630406482 +0000 UTC m=+3418.325953591" lastFinishedPulling="2026-01-30 09:06:27.113101224 +0000 UTC m=+3425.808648343" observedRunningTime="2026-01-30 09:06:27.742052195 +0000 UTC m=+3426.437599314" watchObservedRunningTime="2026-01-30 09:06:27.74797255 +0000 UTC m=+3426.443519669" Jan 30 09:06:28 crc kubenswrapper[4870]: I0130 09:06:28.366186 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-snfnh" Jan 30 09:06:28 crc kubenswrapper[4870]: I0130 09:06:28.366237 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-snfnh" Jan 30 09:06:29 crc kubenswrapper[4870]: I0130 09:06:29.411965 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-snfnh" podUID="bb8d2db0-5110-4134-ab7a-df0a03ec80b4" containerName="registry-server" probeResult="failure" output=< Jan 30 09:06:29 crc kubenswrapper[4870]: timeout: failed to connect service ":50051" within 1s Jan 30 09:06:29 crc kubenswrapper[4870]: > Jan 30 09:06:38 crc kubenswrapper[4870]: I0130 09:06:38.418730 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-snfnh" Jan 30 09:06:38 crc kubenswrapper[4870]: I0130 09:06:38.469168 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-snfnh" Jan 30 09:06:38 crc kubenswrapper[4870]: I0130 09:06:38.659805 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-snfnh"] Jan 30 09:06:39 crc kubenswrapper[4870]: I0130 09:06:39.844506 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-snfnh" podUID="bb8d2db0-5110-4134-ab7a-df0a03ec80b4" containerName="registry-server" containerID="cri-o://6c012123932b91c605aa1585fbbe354ff2ad5cf49af930b09a080e233dd68a86" gracePeriod=2 Jan 30 09:06:40 crc kubenswrapper[4870]: E0130 09:06:40.046727 4870 cadvisor_stats_provider.go:516] "Partial failure 
issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb8d2db0_5110_4134_ab7a_df0a03ec80b4.slice/crio-6c012123932b91c605aa1585fbbe354ff2ad5cf49af930b09a080e233dd68a86.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb8d2db0_5110_4134_ab7a_df0a03ec80b4.slice/crio-conmon-6c012123932b91c605aa1585fbbe354ff2ad5cf49af930b09a080e233dd68a86.scope\": RecentStats: unable to find data in memory cache]" Jan 30 09:06:40 crc kubenswrapper[4870]: I0130 09:06:40.412451 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-snfnh" Jan 30 09:06:40 crc kubenswrapper[4870]: I0130 09:06:40.549507 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb8d2db0-5110-4134-ab7a-df0a03ec80b4-utilities\") pod \"bb8d2db0-5110-4134-ab7a-df0a03ec80b4\" (UID: \"bb8d2db0-5110-4134-ab7a-df0a03ec80b4\") " Jan 30 09:06:40 crc kubenswrapper[4870]: I0130 09:06:40.550024 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb8d2db0-5110-4134-ab7a-df0a03ec80b4-catalog-content\") pod \"bb8d2db0-5110-4134-ab7a-df0a03ec80b4\" (UID: \"bb8d2db0-5110-4134-ab7a-df0a03ec80b4\") " Jan 30 09:06:40 crc kubenswrapper[4870]: I0130 09:06:40.550170 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wdqq\" (UniqueName: \"kubernetes.io/projected/bb8d2db0-5110-4134-ab7a-df0a03ec80b4-kube-api-access-2wdqq\") pod \"bb8d2db0-5110-4134-ab7a-df0a03ec80b4\" (UID: \"bb8d2db0-5110-4134-ab7a-df0a03ec80b4\") " Jan 30 09:06:40 crc kubenswrapper[4870]: I0130 09:06:40.550114 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb8d2db0-5110-4134-ab7a-df0a03ec80b4-utilities" (OuterVolumeSpecName: "utilities") pod "bb8d2db0-5110-4134-ab7a-df0a03ec80b4" (UID: "bb8d2db0-5110-4134-ab7a-df0a03ec80b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:06:40 crc kubenswrapper[4870]: I0130 09:06:40.550858 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb8d2db0-5110-4134-ab7a-df0a03ec80b4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:06:40 crc kubenswrapper[4870]: I0130 09:06:40.567721 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb8d2db0-5110-4134-ab7a-df0a03ec80b4-kube-api-access-2wdqq" (OuterVolumeSpecName: "kube-api-access-2wdqq") pod "bb8d2db0-5110-4134-ab7a-df0a03ec80b4" (UID: "bb8d2db0-5110-4134-ab7a-df0a03ec80b4"). InnerVolumeSpecName "kube-api-access-2wdqq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:06:40 crc kubenswrapper[4870]: I0130 09:06:40.653333 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wdqq\" (UniqueName: \"kubernetes.io/projected/bb8d2db0-5110-4134-ab7a-df0a03ec80b4-kube-api-access-2wdqq\") on node \"crc\" DevicePath \"\"" Jan 30 09:06:40 crc kubenswrapper[4870]: I0130 09:06:40.667142 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb8d2db0-5110-4134-ab7a-df0a03ec80b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb8d2db0-5110-4134-ab7a-df0a03ec80b4" (UID: "bb8d2db0-5110-4134-ab7a-df0a03ec80b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:06:40 crc kubenswrapper[4870]: I0130 09:06:40.755237 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb8d2db0-5110-4134-ab7a-df0a03ec80b4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:06:40 crc kubenswrapper[4870]: I0130 09:06:40.859729 4870 generic.go:334] "Generic (PLEG): container finished" podID="bb8d2db0-5110-4134-ab7a-df0a03ec80b4" containerID="6c012123932b91c605aa1585fbbe354ff2ad5cf49af930b09a080e233dd68a86" exitCode=0 Jan 30 09:06:40 crc kubenswrapper[4870]: I0130 09:06:40.859774 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-snfnh" event={"ID":"bb8d2db0-5110-4134-ab7a-df0a03ec80b4","Type":"ContainerDied","Data":"6c012123932b91c605aa1585fbbe354ff2ad5cf49af930b09a080e233dd68a86"} Jan 30 09:06:40 crc kubenswrapper[4870]: I0130 09:06:40.859814 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-snfnh" event={"ID":"bb8d2db0-5110-4134-ab7a-df0a03ec80b4","Type":"ContainerDied","Data":"3dc7547cf515e9f204ef80e6b218eae2f4161ef84d86bd1dd66e594de10b3bf7"} Jan 30 09:06:40 crc kubenswrapper[4870]: I0130 09:06:40.859833 4870 scope.go:117] "RemoveContainer" containerID="6c012123932b91c605aa1585fbbe354ff2ad5cf49af930b09a080e233dd68a86" Jan 30 09:06:40 crc kubenswrapper[4870]: I0130 09:06:40.859920 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-snfnh" Jan 30 09:06:40 crc kubenswrapper[4870]: I0130 09:06:40.889607 4870 scope.go:117] "RemoveContainer" containerID="02481fea632748099b50b10076988a482fbae35ef176cc74caea7cfbad739622" Jan 30 09:06:40 crc kubenswrapper[4870]: I0130 09:06:40.919921 4870 scope.go:117] "RemoveContainer" containerID="e4308c32c5a5a547b265ce131e450c23aa0c352d28f57d7cd2592d8609453734" Jan 30 09:06:40 crc kubenswrapper[4870]: I0130 09:06:40.940735 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-snfnh"] Jan 30 09:06:40 crc kubenswrapper[4870]: I0130 09:06:40.959728 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-snfnh"] Jan 30 09:06:41 crc kubenswrapper[4870]: I0130 09:06:41.014429 4870 scope.go:117] "RemoveContainer" containerID="6c012123932b91c605aa1585fbbe354ff2ad5cf49af930b09a080e233dd68a86" Jan 30 09:06:41 crc kubenswrapper[4870]: E0130 09:06:41.015078 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c012123932b91c605aa1585fbbe354ff2ad5cf49af930b09a080e233dd68a86\": container with ID starting with 6c012123932b91c605aa1585fbbe354ff2ad5cf49af930b09a080e233dd68a86 not found: ID does not exist" containerID="6c012123932b91c605aa1585fbbe354ff2ad5cf49af930b09a080e233dd68a86" Jan 30 09:06:41 crc kubenswrapper[4870]: I0130 09:06:41.015123 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c012123932b91c605aa1585fbbe354ff2ad5cf49af930b09a080e233dd68a86"} err="failed to get container status \"6c012123932b91c605aa1585fbbe354ff2ad5cf49af930b09a080e233dd68a86\": rpc error: code = NotFound desc = could not find container \"6c012123932b91c605aa1585fbbe354ff2ad5cf49af930b09a080e233dd68a86\": container with ID starting with 6c012123932b91c605aa1585fbbe354ff2ad5cf49af930b09a080e233dd68a86 not found: ID does not exist" Jan 30 09:06:41 crc kubenswrapper[4870]: I0130 09:06:41.015153 4870 scope.go:117] "RemoveContainer" containerID="02481fea632748099b50b10076988a482fbae35ef176cc74caea7cfbad739622" Jan 30 09:06:41 crc kubenswrapper[4870]: E0130 09:06:41.017106 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02481fea632748099b50b10076988a482fbae35ef176cc74caea7cfbad739622\": container with ID starting with 02481fea632748099b50b10076988a482fbae35ef176cc74caea7cfbad739622 not found: ID does not exist" containerID="02481fea632748099b50b10076988a482fbae35ef176cc74caea7cfbad739622" Jan 30 09:06:41 crc kubenswrapper[4870]: I0130 09:06:41.017197 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02481fea632748099b50b10076988a482fbae35ef176cc74caea7cfbad739622"} err="failed to get container status \"02481fea632748099b50b10076988a482fbae35ef176cc74caea7cfbad739622\": rpc error: code = NotFound desc = could not find container \"02481fea632748099b50b10076988a482fbae35ef176cc74caea7cfbad739622\": container with ID starting with 02481fea632748099b50b10076988a482fbae35ef176cc74caea7cfbad739622 not found: ID does not exist" Jan 30 09:06:41 crc kubenswrapper[4870]: I0130 09:06:41.017265 4870 scope.go:117] "RemoveContainer" containerID="e4308c32c5a5a547b265ce131e450c23aa0c352d28f57d7cd2592d8609453734" Jan 30 09:06:41 crc kubenswrapper[4870]: E0130 09:06:41.019580 4870 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"e4308c32c5a5a547b265ce131e450c23aa0c352d28f57d7cd2592d8609453734\": container with ID starting with e4308c32c5a5a547b265ce131e450c23aa0c352d28f57d7cd2592d8609453734 not found: ID does not exist" containerID="e4308c32c5a5a547b265ce131e450c23aa0c352d28f57d7cd2592d8609453734" Jan 30 09:06:41 crc kubenswrapper[4870]: I0130 09:06:41.019632 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4308c32c5a5a547b265ce131e450c23aa0c352d28f57d7cd2592d8609453734"} err="failed to get container status \"e4308c32c5a5a547b265ce131e450c23aa0c352d28f57d7cd2592d8609453734\": rpc error: code = NotFound desc = could not find container \"e4308c32c5a5a547b265ce131e450c23aa0c352d28f57d7cd2592d8609453734\": container with ID starting with e4308c32c5a5a547b265ce131e450c23aa0c352d28f57d7cd2592d8609453734 not found: ID does not exist" Jan 30 09:06:42 crc kubenswrapper[4870]: I0130 09:06:42.090106 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb8d2db0-5110-4134-ab7a-df0a03ec80b4" path="/var/lib/kubelet/pods/bb8d2db0-5110-4134-ab7a-df0a03ec80b4/volumes"
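
Note: the teardown above is the normal catalog-pod lifecycle: SyncLoop DELETE, then "Killing container with a grace period" (gracePeriod=2), ContainerDied, volumes unmounted and detached, then RemoveContainer. The E-level "ContainerStatus from runtime service failed ... NotFound" entries at 09:06:41 are a benign race: the kubelet re-queries CRI-O for containers it has itself just deleted, and the "Cleaned up orphaned pod volumes dir" entry at 09:06:42 confirms the pod directory was reclaimed. A short sketch, again Python against the same hypothetical kubelet.log (one journal entry per line assumed, as journalctl normally emits), that measures how long each pod lived between its "SyncLoop ADD" and "SyncLoop REMOVE":

    import re
    from datetime import datetime

    # Journal prefix ("Jan 30 09:06:49 ...") plus kubelet SyncLoop ADD/REMOVE events.
    EVENT = re.compile(
        r'^(\w{3}) +(\d+) (\d\d:\d\d:\d\d) .*"SyncLoop (ADD|REMOVE)"'
        r' source="api" pods=\["([^"]+)"\]')

    added = {}
    with open('kubelet.log', encoding='utf-8') as f:  # hypothetical file name
        for line in f:
            m = EVENT.match(line)
            if not m:
                continue
            mon, day, clock, kind, pod = m.groups()
            ts = datetime.strptime(f'{mon} {day} {clock}', '%b %d %H:%M:%S')
            if kind == 'ADD':
                added[pod] = ts
            elif pod in added:
                # e.g. redhat-operators-snfnh: ADD 09:06:18, REMOVE 09:06:40 -> ~22s
                print(f'{pod}: {(ts - added.pop(pod)).total_seconds():.0f}s ADD -> REMOVE')
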
Jan 30 09:06:49 crc kubenswrapper[4870]: I0130 09:06:49.337499 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lvc4n"] Jan 30 09:06:49 crc kubenswrapper[4870]: E0130 09:06:49.338988 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5" containerName="extract-content" Jan 30 09:06:49 crc kubenswrapper[4870]: I0130 09:06:49.339012 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5" containerName="extract-content" Jan 30 09:06:49 crc kubenswrapper[4870]: E0130 09:06:49.339044 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5" containerName="extract-utilities" Jan 30 09:06:49 crc kubenswrapper[4870]: I0130 09:06:49.339056 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5" containerName="extract-utilities" Jan 30 09:06:49 crc kubenswrapper[4870]: E0130 09:06:49.339097 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5" containerName="registry-server" Jan 30 09:06:49 crc kubenswrapper[4870]: I0130 09:06:49.339108 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5" containerName="registry-server" Jan 30 09:06:49 crc kubenswrapper[4870]: I0130 09:06:49.339437 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5" containerName="registry-server" Jan 30 09:06:49 crc kubenswrapper[4870]: I0130 09:06:49.341951 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lvc4n" Jan 30 09:06:49 crc kubenswrapper[4870]: I0130 09:06:49.355111 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lvc4n"] Jan 30 09:06:49 crc kubenswrapper[4870]: I0130 09:06:49.445064 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99pl2\" (UniqueName: \"kubernetes.io/projected/79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5-kube-api-access-99pl2\") pod \"community-operators-lvc4n\" (UID: \"79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5\") " pod="openshift-marketplace/community-operators-lvc4n" Jan 30 09:06:49 crc kubenswrapper[4870]: I0130 09:06:49.445118 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5-utilities\") pod \"community-operators-lvc4n\" (UID: \"79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5\") " pod="openshift-marketplace/community-operators-lvc4n" Jan 30 09:06:49 crc kubenswrapper[4870]: I0130 09:06:49.445801 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5-catalog-content\") pod \"community-operators-lvc4n\" (UID: \"79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5\") " pod="openshift-marketplace/community-operators-lvc4n" Jan 30 09:06:49 crc kubenswrapper[4870]: I0130 09:06:49.547751 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5-catalog-content\") pod \"community-operators-lvc4n\" (UID: \"79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5\") " pod="openshift-marketplace/community-operators-lvc4n" Jan 30 09:06:49 crc kubenswrapper[4870]: I0130 09:06:49.547821 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99pl2\" (UniqueName: \"kubernetes.io/projected/79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5-kube-api-access-99pl2\") pod \"community-operators-lvc4n\" (UID: \"79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5\") " pod="openshift-marketplace/community-operators-lvc4n" Jan 30 09:06:49 crc kubenswrapper[4870]: I0130 09:06:49.547848 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5-utilities\") pod \"community-operators-lvc4n\" (UID: \"79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5\") " pod="openshift-marketplace/community-operators-lvc4n" Jan 30 09:06:49 crc kubenswrapper[4870]: I0130 09:06:49.548343 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5-catalog-content\") pod \"community-operators-lvc4n\" (UID: \"79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5\") " pod="openshift-marketplace/community-operators-lvc4n" Jan 30 09:06:49 crc kubenswrapper[4870]: I0130 09:06:49.548372 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5-utilities\") pod \"community-operators-lvc4n\" (UID: \"79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5\") " pod="openshift-marketplace/community-operators-lvc4n" Jan 30 09:06:49 crc kubenswrapper[4870]: I0130 09:06:49.574561 4870 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"kube-api-access-99pl2\" (UniqueName: \"kubernetes.io/projected/79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5-kube-api-access-99pl2\") pod \"community-operators-lvc4n\" (UID: \"79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5\") " pod="openshift-marketplace/community-operators-lvc4n" Jan 30 09:06:49 crc kubenswrapper[4870]: I0130 09:06:49.687509 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lvc4n" Jan 30 09:06:50 crc kubenswrapper[4870]: I0130 09:06:50.267898 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lvc4n"] Jan 30 09:06:50 crc kubenswrapper[4870]: I0130 09:06:50.957087 4870 generic.go:334] "Generic (PLEG): container finished" podID="79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5" containerID="dfc681f06958a7052b07273f65f791c31b68f37949a2c90c717d551f99d10c11" exitCode=0 Jan 30 09:06:50 crc kubenswrapper[4870]: I0130 09:06:50.957244 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvc4n" event={"ID":"79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5","Type":"ContainerDied","Data":"dfc681f06958a7052b07273f65f791c31b68f37949a2c90c717d551f99d10c11"} Jan 30 09:06:50 crc kubenswrapper[4870]: I0130 09:06:50.957469 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvc4n" event={"ID":"79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5","Type":"ContainerStarted","Data":"2b3d03bb0205d3f076d27bc63ac73956f2d285d0e523e900185a3e770c082a31"} Jan 30 09:06:51 crc kubenswrapper[4870]: I0130 09:06:51.967420 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvc4n" event={"ID":"79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5","Type":"ContainerStarted","Data":"ab75b70c52c6135527b84b8c4e2005f9cc37ba58c868e52dc9783f77ed7c5b41"} Jan 30 09:06:53 crc kubenswrapper[4870]: I0130 09:06:53.984661 4870 generic.go:334] "Generic (PLEG): container finished" podID="79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5" containerID="ab75b70c52c6135527b84b8c4e2005f9cc37ba58c868e52dc9783f77ed7c5b41" exitCode=0 Jan 30 09:06:53 crc kubenswrapper[4870]: I0130 09:06:53.984718 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvc4n" event={"ID":"79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5","Type":"ContainerDied","Data":"ab75b70c52c6135527b84b8c4e2005f9cc37ba58c868e52dc9783f77ed7c5b41"} Jan 30 09:06:55 crc kubenswrapper[4870]: I0130 09:06:55.016272 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvc4n" event={"ID":"79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5","Type":"ContainerStarted","Data":"313fd118fe8f2f8b51760cde13bda6c7424565b20047111d85a7a318625300f6"} Jan 30 09:06:55 crc kubenswrapper[4870]: I0130 09:06:55.040099 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lvc4n" podStartSLOduration=2.423203336 podStartE2EDuration="6.040074178s" podCreationTimestamp="2026-01-30 09:06:49 +0000 UTC" firstStartedPulling="2026-01-30 09:06:50.960612453 +0000 UTC m=+3449.656159562" lastFinishedPulling="2026-01-30 09:06:54.577483295 +0000 UTC m=+3453.273030404" observedRunningTime="2026-01-30 09:06:55.036264718 +0000 UTC m=+3453.731811827" watchObservedRunningTime="2026-01-30 09:06:55.040074178 +0000 UTC m=+3453.735621277" Jan 30 09:06:55 crc kubenswrapper[4870]: I0130 09:06:55.249955 4870 patch_prober.go:28] interesting 
pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:06:55 crc kubenswrapper[4870]: I0130 09:06:55.250020 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:06:55 crc kubenswrapper[4870]: I0130 09:06:55.250075 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 09:06:55 crc kubenswrapper[4870]: I0130 09:06:55.250938 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"17820b765eafb63916466378e99044acfb43003b6255354ee74c2b5d6f218271"} pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:06:55 crc kubenswrapper[4870]: I0130 09:06:55.251000 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" containerID="cri-o://17820b765eafb63916466378e99044acfb43003b6255354ee74c2b5d6f218271" gracePeriod=600 Jan 30 09:06:56 crc kubenswrapper[4870]: I0130 09:06:56.028217 4870 generic.go:334] "Generic (PLEG): container finished" podID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerID="17820b765eafb63916466378e99044acfb43003b6255354ee74c2b5d6f218271" exitCode=0 Jan 30 09:06:56 crc kubenswrapper[4870]: I0130 09:06:56.028413 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerDied","Data":"17820b765eafb63916466378e99044acfb43003b6255354ee74c2b5d6f218271"} Jan 30 09:06:56 crc kubenswrapper[4870]: I0130 09:06:56.028670 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerStarted","Data":"b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271"} Jan 30 09:06:56 crc kubenswrapper[4870]: I0130 09:06:56.028691 4870 scope.go:117] "RemoveContainer" containerID="166951655cf7460235835367ab57cecab9ac55ffa7af60737e9d2ae75e6d1283" Jan 30 09:06:59 crc kubenswrapper[4870]: I0130 09:06:59.687645 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lvc4n" Jan 30 09:06:59 crc kubenswrapper[4870]: I0130 09:06:59.688259 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lvc4n" Jan 30 09:06:59 crc kubenswrapper[4870]: I0130 09:06:59.751484 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lvc4n" Jan 30 09:07:00 crc kubenswrapper[4870]: I0130 09:07:00.159313 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lvc4n" Jan 
30 09:07:00 crc kubenswrapper[4870]: I0130 09:07:00.212180 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lvc4n"] Jan 30 09:07:02 crc kubenswrapper[4870]: I0130 09:07:02.124045 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lvc4n" podUID="79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5" containerName="registry-server" containerID="cri-o://313fd118fe8f2f8b51760cde13bda6c7424565b20047111d85a7a318625300f6" gracePeriod=2 Jan 30 09:07:03 crc kubenswrapper[4870]: I0130 09:07:03.133736 4870 generic.go:334] "Generic (PLEG): container finished" podID="79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5" containerID="313fd118fe8f2f8b51760cde13bda6c7424565b20047111d85a7a318625300f6" exitCode=0 Jan 30 09:07:03 crc kubenswrapper[4870]: I0130 09:07:03.133812 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvc4n" event={"ID":"79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5","Type":"ContainerDied","Data":"313fd118fe8f2f8b51760cde13bda6c7424565b20047111d85a7a318625300f6"} Jan 30 09:07:03 crc kubenswrapper[4870]: I0130 09:07:03.251050 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lvc4n" Jan 30 09:07:03 crc kubenswrapper[4870]: I0130 09:07:03.374220 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5-catalog-content\") pod \"79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5\" (UID: \"79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5\") " Jan 30 09:07:03 crc kubenswrapper[4870]: I0130 09:07:03.374395 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99pl2\" (UniqueName: \"kubernetes.io/projected/79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5-kube-api-access-99pl2\") pod \"79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5\" (UID: \"79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5\") " Jan 30 09:07:03 crc kubenswrapper[4870]: I0130 09:07:03.374444 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5-utilities\") pod \"79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5\" (UID: \"79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5\") " Jan 30 09:07:03 crc kubenswrapper[4870]: I0130 09:07:03.375258 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5-utilities" (OuterVolumeSpecName: "utilities") pod "79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5" (UID: "79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:07:03 crc kubenswrapper[4870]: I0130 09:07:03.379973 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5-kube-api-access-99pl2" (OuterVolumeSpecName: "kube-api-access-99pl2") pod "79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5" (UID: "79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5"). InnerVolumeSpecName "kube-api-access-99pl2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:07:03 crc kubenswrapper[4870]: I0130 09:07:03.439971 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5" (UID: "79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:07:03 crc kubenswrapper[4870]: I0130 09:07:03.476225 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:03 crc kubenswrapper[4870]: I0130 09:07:03.476252 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99pl2\" (UniqueName: \"kubernetes.io/projected/79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5-kube-api-access-99pl2\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:03 crc kubenswrapper[4870]: I0130 09:07:03.476265 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:04 crc kubenswrapper[4870]: I0130 09:07:04.150386 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvc4n" event={"ID":"79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5","Type":"ContainerDied","Data":"2b3d03bb0205d3f076d27bc63ac73956f2d285d0e523e900185a3e770c082a31"} Jan 30 09:07:04 crc kubenswrapper[4870]: I0130 09:07:04.150443 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lvc4n" Jan 30 09:07:04 crc kubenswrapper[4870]: I0130 09:07:04.150801 4870 scope.go:117] "RemoveContainer" containerID="313fd118fe8f2f8b51760cde13bda6c7424565b20047111d85a7a318625300f6" Jan 30 09:07:04 crc kubenswrapper[4870]: I0130 09:07:04.190849 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lvc4n"] Jan 30 09:07:04 crc kubenswrapper[4870]: I0130 09:07:04.195868 4870 scope.go:117] "RemoveContainer" containerID="ab75b70c52c6135527b84b8c4e2005f9cc37ba58c868e52dc9783f77ed7c5b41" Jan 30 09:07:04 crc kubenswrapper[4870]: I0130 09:07:04.200813 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lvc4n"] Jan 30 09:07:04 crc kubenswrapper[4870]: I0130 09:07:04.219137 4870 scope.go:117] "RemoveContainer" containerID="dfc681f06958a7052b07273f65f791c31b68f37949a2c90c717d551f99d10c11" Jan 30 09:07:06 crc kubenswrapper[4870]: I0130 09:07:06.091669 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5" path="/var/lib/kubelet/pods/79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5/volumes" Jan 30 09:08:55 crc kubenswrapper[4870]: I0130 09:08:55.249338 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:08:55 crc kubenswrapper[4870]: I0130 09:08:55.249961 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:09:25 crc kubenswrapper[4870]: I0130 09:09:25.250031 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:09:25 crc kubenswrapper[4870]: I0130 09:09:25.250835 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:09:55 crc kubenswrapper[4870]: I0130 09:09:55.250248 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:09:55 crc kubenswrapper[4870]: I0130 09:09:55.250997 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:09:55 crc kubenswrapper[4870]: I0130 09:09:55.251078 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 09:09:55 crc kubenswrapper[4870]: I0130 09:09:55.252175 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271"} pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:09:55 crc kubenswrapper[4870]: I0130 09:09:55.252273 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" containerID="cri-o://b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" gracePeriod=600 Jan 30 09:09:55 crc kubenswrapper[4870]: E0130 09:09:55.390015 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:09:55 crc kubenswrapper[4870]: I0130 09:09:55.938263 4870 generic.go:334] "Generic (PLEG): container finished" podID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" exitCode=0 Jan 30 09:09:55 crc kubenswrapper[4870]: I0130 09:09:55.938332 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerDied","Data":"b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271"} Jan 30 09:09:55 crc kubenswrapper[4870]: I0130 09:09:55.938607 4870 scope.go:117] "RemoveContainer" containerID="17820b765eafb63916466378e99044acfb43003b6255354ee74c2b5d6f218271" Jan 30 09:09:55 crc kubenswrapper[4870]: I0130 09:09:55.939214 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:09:55 crc kubenswrapper[4870]: E0130 09:09:55.939447 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:10:10 crc kubenswrapper[4870]: I0130 09:10:10.079100 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:10:10 crc kubenswrapper[4870]: E0130 09:10:10.080077 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:10:22 crc kubenswrapper[4870]: I0130 09:10:22.081236 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:10:22 crc kubenswrapper[4870]: E0130 09:10:22.082039 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:10:36 crc kubenswrapper[4870]: I0130 09:10:36.079759 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:10:36 crc kubenswrapper[4870]: E0130 09:10:36.084586 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:10:50 crc kubenswrapper[4870]: I0130 09:10:50.074579 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:10:50 crc kubenswrapper[4870]: E0130 09:10:50.075531 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:11:05 crc kubenswrapper[4870]: I0130 09:11:05.075201 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:11:05 crc kubenswrapper[4870]: E0130 09:11:05.076403 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:11:18 crc kubenswrapper[4870]: I0130 09:11:18.075204 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:11:18 crc kubenswrapper[4870]: E0130 09:11:18.076120 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:11:33 crc kubenswrapper[4870]: I0130 09:11:33.075006 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:11:33 crc kubenswrapper[4870]: E0130 09:11:33.075755 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
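
Note: this second back-off cycle (container b46440b22ae9...) shows the retry cadence: while the capped 5m0s CrashLoopBackOff is in force, "RemoveContainer" fires on each sync pass, roughly every 11-15 seconds (09:10:10, 09:10:22, 09:10:36, 09:10:50, 09:11:05, 09:11:18, 09:11:33), and each attempt is rejected by pod_workers until the back-off expires. A last sketch under the same assumptions (hypothetical kubelet.log, one journal entry per line) that prints the retry gaps per container ID:

    import re
    from datetime import datetime

    # kubelet scope.go "RemoveContainer" entries with their journal timestamps.
    RETRY = re.compile(
        r'^(\w{3}) +(\d+) (\d\d:\d\d:\d\d) .*"RemoveContainer" containerID="([0-9a-f]{64})"')

    attempts = {}
    with open('kubelet.log', encoding='utf-8') as f:  # hypothetical file name
        for line in f:
            m = RETRY.match(line)
            if not m:
                continue
            mon, day, clock, cid = m.groups()
            ts = datetime.strptime(f'{mon} {day} {clock}', '%b %d %H:%M:%S')
            attempts.setdefault(cid, []).append(ts)

    # Seconds between consecutive removal attempts for each container.
    for cid, stamps in attempts.items():
        gaps = [f'{(b - a).total_seconds():.0f}s' for a, b in zip(stamps, stamps[1:])]
        print(cid[:12], ' '.join(gaps))
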
Jan 30 09:11:34 crc kubenswrapper[4870]: I0130 09:11:34.318167 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n2nf2"] Jan 30 09:11:34 crc kubenswrapper[4870]: E0130 09:11:34.319217 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5" containerName="extract-utilities" Jan 30 09:11:34 crc kubenswrapper[4870]: I0130 09:11:34.319233 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5" containerName="extract-utilities" Jan 30 09:11:34 crc kubenswrapper[4870]: E0130 09:11:34.319241 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5" containerName="registry-server" Jan 30 09:11:34 crc kubenswrapper[4870]: I0130 09:11:34.319247 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5" containerName="registry-server" Jan 30 09:11:34 crc kubenswrapper[4870]: E0130 09:11:34.319291 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5" containerName="extract-content" Jan 30 09:11:34 crc kubenswrapper[4870]: I0130 09:11:34.319299 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5" containerName="extract-content" Jan 30 09:11:34 crc kubenswrapper[4870]: I0130 09:11:34.319500 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="79a84ac4-c8e1-43b6-a53a-1402a9c2c1a5" containerName="registry-server" Jan 30 09:11:34 crc kubenswrapper[4870]: I0130 09:11:34.320833 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n2nf2" Jan 30 09:11:34 crc kubenswrapper[4870]: I0130 09:11:34.344129 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n2nf2"] Jan 30 09:11:34 crc kubenswrapper[4870]: I0130 09:11:34.479586 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hshs\" (UniqueName: \"kubernetes.io/projected/0b0a02d2-bfb4-4725-838b-bda2924a28d6-kube-api-access-8hshs\") pod \"redhat-marketplace-n2nf2\" (UID: \"0b0a02d2-bfb4-4725-838b-bda2924a28d6\") " pod="openshift-marketplace/redhat-marketplace-n2nf2" Jan 30 09:11:34 crc kubenswrapper[4870]: I0130 09:11:34.479983 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b0a02d2-bfb4-4725-838b-bda2924a28d6-catalog-content\") pod \"redhat-marketplace-n2nf2\" (UID: \"0b0a02d2-bfb4-4725-838b-bda2924a28d6\") " pod="openshift-marketplace/redhat-marketplace-n2nf2" Jan 30 09:11:34 crc kubenswrapper[4870]: I0130 09:11:34.480148 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b0a02d2-bfb4-4725-838b-bda2924a28d6-utilities\") pod \"redhat-marketplace-n2nf2\" (UID: \"0b0a02d2-bfb4-4725-838b-bda2924a28d6\") " pod="openshift-marketplace/redhat-marketplace-n2nf2" Jan 30 09:11:34 crc kubenswrapper[4870]: I0130 09:11:34.582336 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hshs\" (UniqueName: \"kubernetes.io/projected/0b0a02d2-bfb4-4725-838b-bda2924a28d6-kube-api-access-8hshs\") pod \"redhat-marketplace-n2nf2\" (UID: \"0b0a02d2-bfb4-4725-838b-bda2924a28d6\") " pod="openshift-marketplace/redhat-marketplace-n2nf2" Jan 30 09:11:34 crc kubenswrapper[4870]: I0130 09:11:34.582462 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b0a02d2-bfb4-4725-838b-bda2924a28d6-catalog-content\") pod \"redhat-marketplace-n2nf2\" (UID: \"0b0a02d2-bfb4-4725-838b-bda2924a28d6\") " pod="openshift-marketplace/redhat-marketplace-n2nf2" Jan 30 09:11:34 crc kubenswrapper[4870]: I0130 09:11:34.582540 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b0a02d2-bfb4-4725-838b-bda2924a28d6-utilities\") pod \"redhat-marketplace-n2nf2\" (UID: \"0b0a02d2-bfb4-4725-838b-bda2924a28d6\") " pod="openshift-marketplace/redhat-marketplace-n2nf2" Jan 30 09:11:34 crc kubenswrapper[4870]: I0130 09:11:34.583029 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b0a02d2-bfb4-4725-838b-bda2924a28d6-utilities\") pod \"redhat-marketplace-n2nf2\" (UID: \"0b0a02d2-bfb4-4725-838b-bda2924a28d6\") " pod="openshift-marketplace/redhat-marketplace-n2nf2" Jan 30 09:11:34 crc kubenswrapper[4870]: I0130 09:11:34.583036 4870 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b0a02d2-bfb4-4725-838b-bda2924a28d6-catalog-content\") pod \"redhat-marketplace-n2nf2\" (UID: \"0b0a02d2-bfb4-4725-838b-bda2924a28d6\") " pod="openshift-marketplace/redhat-marketplace-n2nf2" Jan 30 09:11:34 crc kubenswrapper[4870]: I0130 09:11:34.607973 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hshs\" (UniqueName: \"kubernetes.io/projected/0b0a02d2-bfb4-4725-838b-bda2924a28d6-kube-api-access-8hshs\") pod \"redhat-marketplace-n2nf2\" (UID: \"0b0a02d2-bfb4-4725-838b-bda2924a28d6\") " pod="openshift-marketplace/redhat-marketplace-n2nf2" Jan 30 09:11:34 crc kubenswrapper[4870]: I0130 09:11:34.651826 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n2nf2" Jan 30 09:11:35 crc kubenswrapper[4870]: I0130 09:11:35.153699 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n2nf2"] Jan 30 09:11:36 crc kubenswrapper[4870]: I0130 09:11:36.008677 4870 generic.go:334] "Generic (PLEG): container finished" podID="0b0a02d2-bfb4-4725-838b-bda2924a28d6" containerID="43803059d2288b81f6f92667f5cc43f474d24e8dc56036acc253d6ea3bfd7b77" exitCode=0 Jan 30 09:11:36 crc kubenswrapper[4870]: I0130 09:11:36.008736 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2nf2" event={"ID":"0b0a02d2-bfb4-4725-838b-bda2924a28d6","Type":"ContainerDied","Data":"43803059d2288b81f6f92667f5cc43f474d24e8dc56036acc253d6ea3bfd7b77"} Jan 30 09:11:36 crc kubenswrapper[4870]: I0130 09:11:36.009080 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2nf2" event={"ID":"0b0a02d2-bfb4-4725-838b-bda2924a28d6","Type":"ContainerStarted","Data":"1d1de4ca13f9f6608f25d6cd27b3c679f2cf0a58ab225bd7e3be562c7a5733eb"} Jan 30 09:11:36 crc kubenswrapper[4870]: I0130 09:11:36.012560 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:11:38 crc kubenswrapper[4870]: I0130 09:11:38.034177 4870 generic.go:334] "Generic (PLEG): container finished" podID="0b0a02d2-bfb4-4725-838b-bda2924a28d6" containerID="4593f224dcf2b275752cd45222193503c76f7026129b283d30cb9747bfbd58f4" exitCode=0 Jan 30 09:11:38 crc kubenswrapper[4870]: I0130 09:11:38.034244 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2nf2" event={"ID":"0b0a02d2-bfb4-4725-838b-bda2924a28d6","Type":"ContainerDied","Data":"4593f224dcf2b275752cd45222193503c76f7026129b283d30cb9747bfbd58f4"} Jan 30 09:11:39 crc kubenswrapper[4870]: I0130 09:11:39.048779 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2nf2" event={"ID":"0b0a02d2-bfb4-4725-838b-bda2924a28d6","Type":"ContainerStarted","Data":"8d3386c464d49e524eb2a7cdc9cea31342222d28508dcddb5b16f2efa933c76a"} Jan 30 09:11:39 crc kubenswrapper[4870]: I0130 09:11:39.082201 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n2nf2" podStartSLOduration=2.644069479 podStartE2EDuration="5.082176402s" podCreationTimestamp="2026-01-30 09:11:34 +0000 UTC" firstStartedPulling="2026-01-30 09:11:36.012269869 +0000 UTC m=+3734.707816978" lastFinishedPulling="2026-01-30 09:11:38.450376772 +0000 UTC m=+3737.145923901" observedRunningTime="2026-01-30 09:11:39.07377465 +0000 UTC 
m=+3737.769321769" watchObservedRunningTime="2026-01-30 09:11:39.082176402 +0000 UTC m=+3737.777723511" Jan 30 09:11:44 crc kubenswrapper[4870]: I0130 09:11:44.307683 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:11:44 crc kubenswrapper[4870]: E0130 09:11:44.309021 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:11:44 crc kubenswrapper[4870]: I0130 09:11:44.652643 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n2nf2" Jan 30 09:11:44 crc kubenswrapper[4870]: I0130 09:11:44.652799 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n2nf2" Jan 30 09:11:44 crc kubenswrapper[4870]: I0130 09:11:44.726739 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n2nf2" Jan 30 09:11:45 crc kubenswrapper[4870]: I0130 09:11:45.387227 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n2nf2" Jan 30 09:11:45 crc kubenswrapper[4870]: I0130 09:11:45.441506 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n2nf2"] Jan 30 09:11:47 crc kubenswrapper[4870]: I0130 09:11:47.351699 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n2nf2" podUID="0b0a02d2-bfb4-4725-838b-bda2924a28d6" containerName="registry-server" containerID="cri-o://8d3386c464d49e524eb2a7cdc9cea31342222d28508dcddb5b16f2efa933c76a" gracePeriod=2 Jan 30 09:11:47 crc kubenswrapper[4870]: I0130 09:11:47.843747 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n2nf2" Jan 30 09:11:47 crc kubenswrapper[4870]: I0130 09:11:47.977068 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b0a02d2-bfb4-4725-838b-bda2924a28d6-utilities\") pod \"0b0a02d2-bfb4-4725-838b-bda2924a28d6\" (UID: \"0b0a02d2-bfb4-4725-838b-bda2924a28d6\") " Jan 30 09:11:47 crc kubenswrapper[4870]: I0130 09:11:47.977309 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hshs\" (UniqueName: \"kubernetes.io/projected/0b0a02d2-bfb4-4725-838b-bda2924a28d6-kube-api-access-8hshs\") pod \"0b0a02d2-bfb4-4725-838b-bda2924a28d6\" (UID: \"0b0a02d2-bfb4-4725-838b-bda2924a28d6\") " Jan 30 09:11:47 crc kubenswrapper[4870]: I0130 09:11:47.977416 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b0a02d2-bfb4-4725-838b-bda2924a28d6-catalog-content\") pod \"0b0a02d2-bfb4-4725-838b-bda2924a28d6\" (UID: \"0b0a02d2-bfb4-4725-838b-bda2924a28d6\") " Jan 30 09:11:47 crc kubenswrapper[4870]: I0130 09:11:47.978814 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b0a02d2-bfb4-4725-838b-bda2924a28d6-utilities" (OuterVolumeSpecName: "utilities") pod "0b0a02d2-bfb4-4725-838b-bda2924a28d6" (UID: "0b0a02d2-bfb4-4725-838b-bda2924a28d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:11:47 crc kubenswrapper[4870]: I0130 09:11:47.994887 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b0a02d2-bfb4-4725-838b-bda2924a28d6-kube-api-access-8hshs" (OuterVolumeSpecName: "kube-api-access-8hshs") pod "0b0a02d2-bfb4-4725-838b-bda2924a28d6" (UID: "0b0a02d2-bfb4-4725-838b-bda2924a28d6"). InnerVolumeSpecName "kube-api-access-8hshs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:11:48 crc kubenswrapper[4870]: I0130 09:11:48.022370 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b0a02d2-bfb4-4725-838b-bda2924a28d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b0a02d2-bfb4-4725-838b-bda2924a28d6" (UID: "0b0a02d2-bfb4-4725-838b-bda2924a28d6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:11:48 crc kubenswrapper[4870]: I0130 09:11:48.079304 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hshs\" (UniqueName: \"kubernetes.io/projected/0b0a02d2-bfb4-4725-838b-bda2924a28d6-kube-api-access-8hshs\") on node \"crc\" DevicePath \"\"" Jan 30 09:11:48 crc kubenswrapper[4870]: I0130 09:11:48.079340 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b0a02d2-bfb4-4725-838b-bda2924a28d6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:11:48 crc kubenswrapper[4870]: I0130 09:11:48.079352 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b0a02d2-bfb4-4725-838b-bda2924a28d6-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:11:48 crc kubenswrapper[4870]: I0130 09:11:48.364480 4870 generic.go:334] "Generic (PLEG): container finished" podID="0b0a02d2-bfb4-4725-838b-bda2924a28d6" containerID="8d3386c464d49e524eb2a7cdc9cea31342222d28508dcddb5b16f2efa933c76a" exitCode=0 Jan 30 09:11:48 crc kubenswrapper[4870]: I0130 09:11:48.364526 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2nf2" event={"ID":"0b0a02d2-bfb4-4725-838b-bda2924a28d6","Type":"ContainerDied","Data":"8d3386c464d49e524eb2a7cdc9cea31342222d28508dcddb5b16f2efa933c76a"} Jan 30 09:11:48 crc kubenswrapper[4870]: I0130 09:11:48.364800 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2nf2" event={"ID":"0b0a02d2-bfb4-4725-838b-bda2924a28d6","Type":"ContainerDied","Data":"1d1de4ca13f9f6608f25d6cd27b3c679f2cf0a58ab225bd7e3be562c7a5733eb"} Jan 30 09:11:48 crc kubenswrapper[4870]: I0130 09:11:48.364823 4870 scope.go:117] "RemoveContainer" containerID="8d3386c464d49e524eb2a7cdc9cea31342222d28508dcddb5b16f2efa933c76a" Jan 30 09:11:48 crc kubenswrapper[4870]: I0130 09:11:48.364552 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n2nf2" Jan 30 09:11:48 crc kubenswrapper[4870]: I0130 09:11:48.399322 4870 scope.go:117] "RemoveContainer" containerID="4593f224dcf2b275752cd45222193503c76f7026129b283d30cb9747bfbd58f4" Jan 30 09:11:48 crc kubenswrapper[4870]: I0130 09:11:48.409603 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n2nf2"] Jan 30 09:11:48 crc kubenswrapper[4870]: I0130 09:11:48.418905 4870 scope.go:117] "RemoveContainer" containerID="43803059d2288b81f6f92667f5cc43f474d24e8dc56036acc253d6ea3bfd7b77" Jan 30 09:11:48 crc kubenswrapper[4870]: I0130 09:11:48.423667 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n2nf2"] Jan 30 09:11:48 crc kubenswrapper[4870]: I0130 09:11:48.487655 4870 scope.go:117] "RemoveContainer" containerID="8d3386c464d49e524eb2a7cdc9cea31342222d28508dcddb5b16f2efa933c76a" Jan 30 09:11:48 crc kubenswrapper[4870]: E0130 09:11:48.489308 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d3386c464d49e524eb2a7cdc9cea31342222d28508dcddb5b16f2efa933c76a\": container with ID starting with 8d3386c464d49e524eb2a7cdc9cea31342222d28508dcddb5b16f2efa933c76a not found: ID does not exist" containerID="8d3386c464d49e524eb2a7cdc9cea31342222d28508dcddb5b16f2efa933c76a" Jan 30 09:11:48 crc kubenswrapper[4870]: I0130 09:11:48.489360 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d3386c464d49e524eb2a7cdc9cea31342222d28508dcddb5b16f2efa933c76a"} err="failed to get container status \"8d3386c464d49e524eb2a7cdc9cea31342222d28508dcddb5b16f2efa933c76a\": rpc error: code = NotFound desc = could not find container \"8d3386c464d49e524eb2a7cdc9cea31342222d28508dcddb5b16f2efa933c76a\": container with ID starting with 8d3386c464d49e524eb2a7cdc9cea31342222d28508dcddb5b16f2efa933c76a not found: ID does not exist" Jan 30 09:11:48 crc kubenswrapper[4870]: I0130 09:11:48.489387 4870 scope.go:117] "RemoveContainer" containerID="4593f224dcf2b275752cd45222193503c76f7026129b283d30cb9747bfbd58f4" Jan 30 09:11:48 crc kubenswrapper[4870]: E0130 09:11:48.489711 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4593f224dcf2b275752cd45222193503c76f7026129b283d30cb9747bfbd58f4\": container with ID starting with 4593f224dcf2b275752cd45222193503c76f7026129b283d30cb9747bfbd58f4 not found: ID does not exist" containerID="4593f224dcf2b275752cd45222193503c76f7026129b283d30cb9747bfbd58f4" Jan 30 09:11:48 crc kubenswrapper[4870]: I0130 09:11:48.489731 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4593f224dcf2b275752cd45222193503c76f7026129b283d30cb9747bfbd58f4"} err="failed to get container status \"4593f224dcf2b275752cd45222193503c76f7026129b283d30cb9747bfbd58f4\": rpc error: code = NotFound desc = could not find container \"4593f224dcf2b275752cd45222193503c76f7026129b283d30cb9747bfbd58f4\": container with ID starting with 4593f224dcf2b275752cd45222193503c76f7026129b283d30cb9747bfbd58f4 not found: ID does not exist" Jan 30 09:11:48 crc kubenswrapper[4870]: I0130 09:11:48.489750 4870 scope.go:117] "RemoveContainer" containerID="43803059d2288b81f6f92667f5cc43f474d24e8dc56036acc253d6ea3bfd7b77" Jan 30 09:11:48 crc kubenswrapper[4870]: E0130 09:11:48.489947 4870 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"43803059d2288b81f6f92667f5cc43f474d24e8dc56036acc253d6ea3bfd7b77\": container with ID starting with 43803059d2288b81f6f92667f5cc43f474d24e8dc56036acc253d6ea3bfd7b77 not found: ID does not exist" containerID="43803059d2288b81f6f92667f5cc43f474d24e8dc56036acc253d6ea3bfd7b77" Jan 30 09:11:48 crc kubenswrapper[4870]: I0130 09:11:48.489969 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43803059d2288b81f6f92667f5cc43f474d24e8dc56036acc253d6ea3bfd7b77"} err="failed to get container status \"43803059d2288b81f6f92667f5cc43f474d24e8dc56036acc253d6ea3bfd7b77\": rpc error: code = NotFound desc = could not find container \"43803059d2288b81f6f92667f5cc43f474d24e8dc56036acc253d6ea3bfd7b77\": container with ID starting with 43803059d2288b81f6f92667f5cc43f474d24e8dc56036acc253d6ea3bfd7b77 not found: ID does not exist" Jan 30 09:11:50 crc kubenswrapper[4870]: I0130 09:11:50.087651 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b0a02d2-bfb4-4725-838b-bda2924a28d6" path="/var/lib/kubelet/pods/0b0a02d2-bfb4-4725-838b-bda2924a28d6/volumes" Jan 30 09:11:55 crc kubenswrapper[4870]: I0130 09:11:55.075100 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:11:55 crc kubenswrapper[4870]: E0130 09:11:55.076542 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:12:07 crc kubenswrapper[4870]: I0130 09:12:07.075054 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:12:07 crc kubenswrapper[4870]: E0130 09:12:07.075926 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:12:19 crc kubenswrapper[4870]: I0130 09:12:19.075624 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:12:19 crc kubenswrapper[4870]: E0130 09:12:19.076325 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:12:32 crc kubenswrapper[4870]: I0130 09:12:32.080801 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:12:32 crc kubenswrapper[4870]: E0130 09:12:32.081574 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:12:44 crc kubenswrapper[4870]: I0130 09:12:44.075203 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:12:44 crc kubenswrapper[4870]: E0130 09:12:44.076224 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:12:56 crc kubenswrapper[4870]: I0130 09:12:56.075480 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:12:56 crc kubenswrapper[4870]: E0130 09:12:56.076784 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:13:11 crc kubenswrapper[4870]: I0130 09:13:11.074813 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:13:11 crc kubenswrapper[4870]: E0130 09:13:11.075680 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:13:25 crc kubenswrapper[4870]: I0130 09:13:25.075229 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:13:25 crc kubenswrapper[4870]: E0130 09:13:25.076385 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:13:40 crc kubenswrapper[4870]: I0130 09:13:40.074642 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:13:40 crc kubenswrapper[4870]: E0130 09:13:40.075572 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:13:51 crc kubenswrapper[4870]: I0130 09:13:51.074939 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:13:51 crc kubenswrapper[4870]: E0130 09:13:51.076277 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:14:03 crc kubenswrapper[4870]: I0130 09:14:03.074859 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:14:03 crc kubenswrapper[4870]: E0130 09:14:03.075791 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:14:18 crc kubenswrapper[4870]: I0130 09:14:18.074789 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:14:18 crc kubenswrapper[4870]: E0130 09:14:18.075635 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:14:30 crc kubenswrapper[4870]: I0130 09:14:30.075089 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:14:30 crc kubenswrapper[4870]: E0130 09:14:30.076219 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:14:43 crc kubenswrapper[4870]: I0130 09:14:43.074785 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:14:43 crc kubenswrapper[4870]: E0130 09:14:43.075634 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" 
podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:14:58 crc kubenswrapper[4870]: I0130 09:14:58.074961 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:14:59 crc kubenswrapper[4870]: I0130 09:14:59.254241 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerStarted","Data":"78ce0353b7c3323cf1e9811378a7fe1749f61d99675d2c6c610bade4f28a064b"} Jan 30 09:15:00 crc kubenswrapper[4870]: I0130 09:15:00.206379 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496075-p689n"] Jan 30 09:15:00 crc kubenswrapper[4870]: E0130 09:15:00.207216 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0a02d2-bfb4-4725-838b-bda2924a28d6" containerName="extract-content" Jan 30 09:15:00 crc kubenswrapper[4870]: I0130 09:15:00.207244 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0a02d2-bfb4-4725-838b-bda2924a28d6" containerName="extract-content" Jan 30 09:15:00 crc kubenswrapper[4870]: E0130 09:15:00.207269 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0a02d2-bfb4-4725-838b-bda2924a28d6" containerName="registry-server" Jan 30 09:15:00 crc kubenswrapper[4870]: I0130 09:15:00.207278 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0a02d2-bfb4-4725-838b-bda2924a28d6" containerName="registry-server" Jan 30 09:15:00 crc kubenswrapper[4870]: E0130 09:15:00.207311 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0a02d2-bfb4-4725-838b-bda2924a28d6" containerName="extract-utilities" Jan 30 09:15:00 crc kubenswrapper[4870]: I0130 09:15:00.207319 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0a02d2-bfb4-4725-838b-bda2924a28d6" containerName="extract-utilities" Jan 30 09:15:00 crc kubenswrapper[4870]: I0130 09:15:00.207577 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b0a02d2-bfb4-4725-838b-bda2924a28d6" containerName="registry-server" Jan 30 09:15:00 crc kubenswrapper[4870]: I0130 09:15:00.208473 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-p689n" Jan 30 09:15:00 crc kubenswrapper[4870]: I0130 09:15:00.210154 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 09:15:00 crc kubenswrapper[4870]: I0130 09:15:00.210791 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 09:15:00 crc kubenswrapper[4870]: I0130 09:15:00.227461 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496075-p689n"] Jan 30 09:15:00 crc kubenswrapper[4870]: I0130 09:15:00.280338 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5nn9\" (UniqueName: \"kubernetes.io/projected/c87e8dab-4aab-4cd8-bcb4-29d45c0f884f-kube-api-access-v5nn9\") pod \"collect-profiles-29496075-p689n\" (UID: \"c87e8dab-4aab-4cd8-bcb4-29d45c0f884f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-p689n" Jan 30 09:15:00 crc kubenswrapper[4870]: I0130 09:15:00.280664 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c87e8dab-4aab-4cd8-bcb4-29d45c0f884f-secret-volume\") pod \"collect-profiles-29496075-p689n\" (UID: \"c87e8dab-4aab-4cd8-bcb4-29d45c0f884f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-p689n" Jan 30 09:15:00 crc kubenswrapper[4870]: I0130 09:15:00.280774 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c87e8dab-4aab-4cd8-bcb4-29d45c0f884f-config-volume\") pod \"collect-profiles-29496075-p689n\" (UID: \"c87e8dab-4aab-4cd8-bcb4-29d45c0f884f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-p689n" Jan 30 09:15:00 crc kubenswrapper[4870]: I0130 09:15:00.382509 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5nn9\" (UniqueName: \"kubernetes.io/projected/c87e8dab-4aab-4cd8-bcb4-29d45c0f884f-kube-api-access-v5nn9\") pod \"collect-profiles-29496075-p689n\" (UID: \"c87e8dab-4aab-4cd8-bcb4-29d45c0f884f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-p689n" Jan 30 09:15:00 crc kubenswrapper[4870]: I0130 09:15:00.382692 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c87e8dab-4aab-4cd8-bcb4-29d45c0f884f-secret-volume\") pod \"collect-profiles-29496075-p689n\" (UID: \"c87e8dab-4aab-4cd8-bcb4-29d45c0f884f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-p689n" Jan 30 09:15:00 crc kubenswrapper[4870]: I0130 09:15:00.382755 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c87e8dab-4aab-4cd8-bcb4-29d45c0f884f-config-volume\") pod \"collect-profiles-29496075-p689n\" (UID: \"c87e8dab-4aab-4cd8-bcb4-29d45c0f884f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-p689n" Jan 30 09:15:00 crc kubenswrapper[4870]: I0130 09:15:00.383841 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c87e8dab-4aab-4cd8-bcb4-29d45c0f884f-config-volume\") pod 
\"collect-profiles-29496075-p689n\" (UID: \"c87e8dab-4aab-4cd8-bcb4-29d45c0f884f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-p689n" Jan 30 09:15:00 crc kubenswrapper[4870]: I0130 09:15:00.395178 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c87e8dab-4aab-4cd8-bcb4-29d45c0f884f-secret-volume\") pod \"collect-profiles-29496075-p689n\" (UID: \"c87e8dab-4aab-4cd8-bcb4-29d45c0f884f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-p689n" Jan 30 09:15:00 crc kubenswrapper[4870]: I0130 09:15:00.402342 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5nn9\" (UniqueName: \"kubernetes.io/projected/c87e8dab-4aab-4cd8-bcb4-29d45c0f884f-kube-api-access-v5nn9\") pod \"collect-profiles-29496075-p689n\" (UID: \"c87e8dab-4aab-4cd8-bcb4-29d45c0f884f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-p689n" Jan 30 09:15:00 crc kubenswrapper[4870]: I0130 09:15:00.543927 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-p689n" Jan 30 09:15:01 crc kubenswrapper[4870]: I0130 09:15:01.032281 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496075-p689n"] Jan 30 09:15:01 crc kubenswrapper[4870]: I0130 09:15:01.271582 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-p689n" event={"ID":"c87e8dab-4aab-4cd8-bcb4-29d45c0f884f","Type":"ContainerStarted","Data":"57695176c047a4d2a3192f1a496290ea01944208e9a6861fc8400e87ad5e75a8"} Jan 30 09:15:01 crc kubenswrapper[4870]: I0130 09:15:01.271634 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-p689n" event={"ID":"c87e8dab-4aab-4cd8-bcb4-29d45c0f884f","Type":"ContainerStarted","Data":"6de73acc6cb9cb438f39e404589a32d8071470536db0821999d3d4de93352672"} Jan 30 09:15:01 crc kubenswrapper[4870]: I0130 09:15:01.291006 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-p689n" podStartSLOduration=1.290982603 podStartE2EDuration="1.290982603s" podCreationTimestamp="2026-01-30 09:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 09:15:01.28608401 +0000 UTC m=+3939.981631129" watchObservedRunningTime="2026-01-30 09:15:01.290982603 +0000 UTC m=+3939.986529732" Jan 30 09:15:02 crc kubenswrapper[4870]: I0130 09:15:02.283212 4870 generic.go:334] "Generic (PLEG): container finished" podID="c87e8dab-4aab-4cd8-bcb4-29d45c0f884f" containerID="57695176c047a4d2a3192f1a496290ea01944208e9a6861fc8400e87ad5e75a8" exitCode=0 Jan 30 09:15:02 crc kubenswrapper[4870]: I0130 09:15:02.283287 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-p689n" event={"ID":"c87e8dab-4aab-4cd8-bcb4-29d45c0f884f","Type":"ContainerDied","Data":"57695176c047a4d2a3192f1a496290ea01944208e9a6861fc8400e87ad5e75a8"} Jan 30 09:15:03 crc kubenswrapper[4870]: I0130 09:15:03.733906 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-p689n" Jan 30 09:15:03 crc kubenswrapper[4870]: I0130 09:15:03.864099 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c87e8dab-4aab-4cd8-bcb4-29d45c0f884f-secret-volume\") pod \"c87e8dab-4aab-4cd8-bcb4-29d45c0f884f\" (UID: \"c87e8dab-4aab-4cd8-bcb4-29d45c0f884f\") " Jan 30 09:15:03 crc kubenswrapper[4870]: I0130 09:15:03.864506 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c87e8dab-4aab-4cd8-bcb4-29d45c0f884f-config-volume\") pod \"c87e8dab-4aab-4cd8-bcb4-29d45c0f884f\" (UID: \"c87e8dab-4aab-4cd8-bcb4-29d45c0f884f\") " Jan 30 09:15:03 crc kubenswrapper[4870]: I0130 09:15:03.865009 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c87e8dab-4aab-4cd8-bcb4-29d45c0f884f-config-volume" (OuterVolumeSpecName: "config-volume") pod "c87e8dab-4aab-4cd8-bcb4-29d45c0f884f" (UID: "c87e8dab-4aab-4cd8-bcb4-29d45c0f884f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 09:15:03 crc kubenswrapper[4870]: I0130 09:15:03.865303 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5nn9\" (UniqueName: \"kubernetes.io/projected/c87e8dab-4aab-4cd8-bcb4-29d45c0f884f-kube-api-access-v5nn9\") pod \"c87e8dab-4aab-4cd8-bcb4-29d45c0f884f\" (UID: \"c87e8dab-4aab-4cd8-bcb4-29d45c0f884f\") " Jan 30 09:15:03 crc kubenswrapper[4870]: I0130 09:15:03.866098 4870 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c87e8dab-4aab-4cd8-bcb4-29d45c0f884f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:15:03 crc kubenswrapper[4870]: I0130 09:15:03.873757 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c87e8dab-4aab-4cd8-bcb4-29d45c0f884f-kube-api-access-v5nn9" (OuterVolumeSpecName: "kube-api-access-v5nn9") pod "c87e8dab-4aab-4cd8-bcb4-29d45c0f884f" (UID: "c87e8dab-4aab-4cd8-bcb4-29d45c0f884f"). InnerVolumeSpecName "kube-api-access-v5nn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:15:03 crc kubenswrapper[4870]: I0130 09:15:03.878230 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c87e8dab-4aab-4cd8-bcb4-29d45c0f884f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c87e8dab-4aab-4cd8-bcb4-29d45c0f884f" (UID: "c87e8dab-4aab-4cd8-bcb4-29d45c0f884f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:15:03 crc kubenswrapper[4870]: I0130 09:15:03.967756 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5nn9\" (UniqueName: \"kubernetes.io/projected/c87e8dab-4aab-4cd8-bcb4-29d45c0f884f-kube-api-access-v5nn9\") on node \"crc\" DevicePath \"\"" Jan 30 09:15:03 crc kubenswrapper[4870]: I0130 09:15:03.967803 4870 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c87e8dab-4aab-4cd8-bcb4-29d45c0f884f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:15:04 crc kubenswrapper[4870]: I0130 09:15:04.301523 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-p689n" event={"ID":"c87e8dab-4aab-4cd8-bcb4-29d45c0f884f","Type":"ContainerDied","Data":"6de73acc6cb9cb438f39e404589a32d8071470536db0821999d3d4de93352672"} Jan 30 09:15:04 crc kubenswrapper[4870]: I0130 09:15:04.301561 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6de73acc6cb9cb438f39e404589a32d8071470536db0821999d3d4de93352672" Jan 30 09:15:04 crc kubenswrapper[4870]: I0130 09:15:04.301583 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-p689n" Jan 30 09:15:04 crc kubenswrapper[4870]: I0130 09:15:04.378809 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496030-smd5c"] Jan 30 09:15:04 crc kubenswrapper[4870]: I0130 09:15:04.391380 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496030-smd5c"] Jan 30 09:15:06 crc kubenswrapper[4870]: I0130 09:15:06.089660 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ecb0f40-780e-4f90-84aa-17af92178d88" path="/var/lib/kubelet/pods/3ecb0f40-780e-4f90-84aa-17af92178d88/volumes" Jan 30 09:15:38 crc kubenswrapper[4870]: I0130 09:15:38.714206 4870 scope.go:117] "RemoveContainer" containerID="dad0dbfc8aebf8b014e37d2f50b6d2deebcdfeb8419d761ea14c44680273c1c3" Jan 30 09:16:20 crc kubenswrapper[4870]: I0130 09:16:20.939175 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dbjtr"] Jan 30 09:16:20 crc kubenswrapper[4870]: E0130 09:16:20.940039 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c87e8dab-4aab-4cd8-bcb4-29d45c0f884f" containerName="collect-profiles" Jan 30 09:16:20 crc kubenswrapper[4870]: I0130 09:16:20.940053 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="c87e8dab-4aab-4cd8-bcb4-29d45c0f884f" containerName="collect-profiles" Jan 30 09:16:20 crc kubenswrapper[4870]: I0130 09:16:20.940266 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="c87e8dab-4aab-4cd8-bcb4-29d45c0f884f" containerName="collect-profiles" Jan 30 09:16:20 crc kubenswrapper[4870]: I0130 09:16:20.941599 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dbjtr" Jan 30 09:16:20 crc kubenswrapper[4870]: I0130 09:16:20.968548 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dbjtr"] Jan 30 09:16:21 crc kubenswrapper[4870]: I0130 09:16:21.059599 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aad36608-62a8-434a-899e-2383285678ba-utilities\") pod \"redhat-operators-dbjtr\" (UID: \"aad36608-62a8-434a-899e-2383285678ba\") " pod="openshift-marketplace/redhat-operators-dbjtr" Jan 30 09:16:21 crc kubenswrapper[4870]: I0130 09:16:21.059694 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aad36608-62a8-434a-899e-2383285678ba-catalog-content\") pod \"redhat-operators-dbjtr\" (UID: \"aad36608-62a8-434a-899e-2383285678ba\") " pod="openshift-marketplace/redhat-operators-dbjtr" Jan 30 09:16:21 crc kubenswrapper[4870]: I0130 09:16:21.059763 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwkbc\" (UniqueName: \"kubernetes.io/projected/aad36608-62a8-434a-899e-2383285678ba-kube-api-access-cwkbc\") pod \"redhat-operators-dbjtr\" (UID: \"aad36608-62a8-434a-899e-2383285678ba\") " pod="openshift-marketplace/redhat-operators-dbjtr" Jan 30 09:16:21 crc kubenswrapper[4870]: I0130 09:16:21.162640 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aad36608-62a8-434a-899e-2383285678ba-utilities\") pod \"redhat-operators-dbjtr\" (UID: \"aad36608-62a8-434a-899e-2383285678ba\") " pod="openshift-marketplace/redhat-operators-dbjtr" Jan 30 09:16:21 crc kubenswrapper[4870]: I0130 09:16:21.162784 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aad36608-62a8-434a-899e-2383285678ba-catalog-content\") pod \"redhat-operators-dbjtr\" (UID: \"aad36608-62a8-434a-899e-2383285678ba\") " pod="openshift-marketplace/redhat-operators-dbjtr" Jan 30 09:16:21 crc kubenswrapper[4870]: I0130 09:16:21.162929 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwkbc\" (UniqueName: \"kubernetes.io/projected/aad36608-62a8-434a-899e-2383285678ba-kube-api-access-cwkbc\") pod \"redhat-operators-dbjtr\" (UID: \"aad36608-62a8-434a-899e-2383285678ba\") " pod="openshift-marketplace/redhat-operators-dbjtr" Jan 30 09:16:21 crc kubenswrapper[4870]: I0130 09:16:21.163188 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aad36608-62a8-434a-899e-2383285678ba-utilities\") pod \"redhat-operators-dbjtr\" (UID: \"aad36608-62a8-434a-899e-2383285678ba\") " pod="openshift-marketplace/redhat-operators-dbjtr" Jan 30 09:16:21 crc kubenswrapper[4870]: I0130 09:16:21.163354 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aad36608-62a8-434a-899e-2383285678ba-catalog-content\") pod \"redhat-operators-dbjtr\" (UID: \"aad36608-62a8-434a-899e-2383285678ba\") " pod="openshift-marketplace/redhat-operators-dbjtr" Jan 30 09:16:21 crc kubenswrapper[4870]: I0130 09:16:21.199592 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cwkbc\" (UniqueName: \"kubernetes.io/projected/aad36608-62a8-434a-899e-2383285678ba-kube-api-access-cwkbc\") pod \"redhat-operators-dbjtr\" (UID: \"aad36608-62a8-434a-899e-2383285678ba\") " pod="openshift-marketplace/redhat-operators-dbjtr" Jan 30 09:16:21 crc kubenswrapper[4870]: I0130 09:16:21.279208 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dbjtr" Jan 30 09:16:21 crc kubenswrapper[4870]: I0130 09:16:21.784688 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dbjtr"] Jan 30 09:16:22 crc kubenswrapper[4870]: I0130 09:16:22.128532 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbjtr" event={"ID":"aad36608-62a8-434a-899e-2383285678ba","Type":"ContainerStarted","Data":"52dd63245a1d7e8beb539c26a9811344da5f713cd40dc5ebd64a530ff9ba888b"} Jan 30 09:16:22 crc kubenswrapper[4870]: I0130 09:16:22.129647 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbjtr" event={"ID":"aad36608-62a8-434a-899e-2383285678ba","Type":"ContainerStarted","Data":"1c7d72833670a470a189ef9ff7a4db35f8fb4ede35202afd047cf7958ef6ba71"} Jan 30 09:16:23 crc kubenswrapper[4870]: I0130 09:16:23.148561 4870 generic.go:334] "Generic (PLEG): container finished" podID="aad36608-62a8-434a-899e-2383285678ba" containerID="52dd63245a1d7e8beb539c26a9811344da5f713cd40dc5ebd64a530ff9ba888b" exitCode=0 Jan 30 09:16:23 crc kubenswrapper[4870]: I0130 09:16:23.148672 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbjtr" event={"ID":"aad36608-62a8-434a-899e-2383285678ba","Type":"ContainerDied","Data":"52dd63245a1d7e8beb539c26a9811344da5f713cd40dc5ebd64a530ff9ba888b"} Jan 30 09:16:24 crc kubenswrapper[4870]: I0130 09:16:24.136858 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jjl7n"] Jan 30 09:16:24 crc kubenswrapper[4870]: I0130 09:16:24.140171 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jjl7n" Jan 30 09:16:24 crc kubenswrapper[4870]: I0130 09:16:24.150827 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jjl7n"] Jan 30 09:16:24 crc kubenswrapper[4870]: I0130 09:16:24.339765 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d63faf5c-d054-4828-b211-c0f100f1f4ca-utilities\") pod \"certified-operators-jjl7n\" (UID: \"d63faf5c-d054-4828-b211-c0f100f1f4ca\") " pod="openshift-marketplace/certified-operators-jjl7n" Jan 30 09:16:24 crc kubenswrapper[4870]: I0130 09:16:24.340154 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6k9s\" (UniqueName: \"kubernetes.io/projected/d63faf5c-d054-4828-b211-c0f100f1f4ca-kube-api-access-n6k9s\") pod \"certified-operators-jjl7n\" (UID: \"d63faf5c-d054-4828-b211-c0f100f1f4ca\") " pod="openshift-marketplace/certified-operators-jjl7n" Jan 30 09:16:24 crc kubenswrapper[4870]: I0130 09:16:24.340334 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d63faf5c-d054-4828-b211-c0f100f1f4ca-catalog-content\") pod \"certified-operators-jjl7n\" (UID: \"d63faf5c-d054-4828-b211-c0f100f1f4ca\") " pod="openshift-marketplace/certified-operators-jjl7n" Jan 30 09:16:24 crc kubenswrapper[4870]: I0130 09:16:24.442894 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d63faf5c-d054-4828-b211-c0f100f1f4ca-utilities\") pod \"certified-operators-jjl7n\" (UID: \"d63faf5c-d054-4828-b211-c0f100f1f4ca\") " pod="openshift-marketplace/certified-operators-jjl7n" Jan 30 09:16:24 crc kubenswrapper[4870]: I0130 09:16:24.442962 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6k9s\" (UniqueName: \"kubernetes.io/projected/d63faf5c-d054-4828-b211-c0f100f1f4ca-kube-api-access-n6k9s\") pod \"certified-operators-jjl7n\" (UID: \"d63faf5c-d054-4828-b211-c0f100f1f4ca\") " pod="openshift-marketplace/certified-operators-jjl7n" Jan 30 09:16:24 crc kubenswrapper[4870]: I0130 09:16:24.443048 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d63faf5c-d054-4828-b211-c0f100f1f4ca-catalog-content\") pod \"certified-operators-jjl7n\" (UID: \"d63faf5c-d054-4828-b211-c0f100f1f4ca\") " pod="openshift-marketplace/certified-operators-jjl7n" Jan 30 09:16:24 crc kubenswrapper[4870]: I0130 09:16:24.443585 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d63faf5c-d054-4828-b211-c0f100f1f4ca-utilities\") pod \"certified-operators-jjl7n\" (UID: \"d63faf5c-d054-4828-b211-c0f100f1f4ca\") " pod="openshift-marketplace/certified-operators-jjl7n" Jan 30 09:16:24 crc kubenswrapper[4870]: I0130 09:16:24.443636 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d63faf5c-d054-4828-b211-c0f100f1f4ca-catalog-content\") pod \"certified-operators-jjl7n\" (UID: \"d63faf5c-d054-4828-b211-c0f100f1f4ca\") " pod="openshift-marketplace/certified-operators-jjl7n" Jan 30 09:16:24 crc kubenswrapper[4870]: I0130 09:16:24.471744 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-n6k9s\" (UniqueName: \"kubernetes.io/projected/d63faf5c-d054-4828-b211-c0f100f1f4ca-kube-api-access-n6k9s\") pod \"certified-operators-jjl7n\" (UID: \"d63faf5c-d054-4828-b211-c0f100f1f4ca\") " pod="openshift-marketplace/certified-operators-jjl7n" Jan 30 09:16:24 crc kubenswrapper[4870]: I0130 09:16:24.762285 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jjl7n" Jan 30 09:16:25 crc kubenswrapper[4870]: I0130 09:16:25.170577 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbjtr" event={"ID":"aad36608-62a8-434a-899e-2383285678ba","Type":"ContainerStarted","Data":"3703c7a0066a6584621728d0695721b30bca7cbc891760dbc17a30dbc0409a7e"} Jan 30 09:16:25 crc kubenswrapper[4870]: I0130 09:16:25.371395 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jjl7n"] Jan 30 09:16:26 crc kubenswrapper[4870]: I0130 09:16:26.185764 4870 generic.go:334] "Generic (PLEG): container finished" podID="d63faf5c-d054-4828-b211-c0f100f1f4ca" containerID="9ce88b36b7f8bb4c84d2e0cb990ff2dfc0b7ef3e5a70fa171000086079fff96f" exitCode=0 Jan 30 09:16:26 crc kubenswrapper[4870]: I0130 09:16:26.186866 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjl7n" event={"ID":"d63faf5c-d054-4828-b211-c0f100f1f4ca","Type":"ContainerDied","Data":"9ce88b36b7f8bb4c84d2e0cb990ff2dfc0b7ef3e5a70fa171000086079fff96f"} Jan 30 09:16:26 crc kubenswrapper[4870]: I0130 09:16:26.186963 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjl7n" event={"ID":"d63faf5c-d054-4828-b211-c0f100f1f4ca","Type":"ContainerStarted","Data":"15ef4d9c1175afeaf8942eee6fe6591c948a8b9b5f0d512ace1d005e4aefce89"} Jan 30 09:16:28 crc kubenswrapper[4870]: I0130 09:16:28.229580 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjl7n" event={"ID":"d63faf5c-d054-4828-b211-c0f100f1f4ca","Type":"ContainerStarted","Data":"03a4e150a1883a14201e377adab34fa66160798fa54ff5e83c49cff343c943f0"} Jan 30 09:16:31 crc kubenswrapper[4870]: I0130 09:16:31.269827 4870 generic.go:334] "Generic (PLEG): container finished" podID="d63faf5c-d054-4828-b211-c0f100f1f4ca" containerID="03a4e150a1883a14201e377adab34fa66160798fa54ff5e83c49cff343c943f0" exitCode=0 Jan 30 09:16:31 crc kubenswrapper[4870]: I0130 09:16:31.269922 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjl7n" event={"ID":"d63faf5c-d054-4828-b211-c0f100f1f4ca","Type":"ContainerDied","Data":"03a4e150a1883a14201e377adab34fa66160798fa54ff5e83c49cff343c943f0"} Jan 30 09:16:32 crc kubenswrapper[4870]: I0130 09:16:32.286614 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjl7n" event={"ID":"d63faf5c-d054-4828-b211-c0f100f1f4ca","Type":"ContainerStarted","Data":"9d90852ba42919283ddf463edcc40bc4b7429a7159617021ec5a64efb04dbac5"} Jan 30 09:16:32 crc kubenswrapper[4870]: I0130 09:16:32.290149 4870 generic.go:334] "Generic (PLEG): container finished" podID="aad36608-62a8-434a-899e-2383285678ba" containerID="3703c7a0066a6584621728d0695721b30bca7cbc891760dbc17a30dbc0409a7e" exitCode=0 Jan 30 09:16:32 crc kubenswrapper[4870]: I0130 09:16:32.290191 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbjtr" 
event={"ID":"aad36608-62a8-434a-899e-2383285678ba","Type":"ContainerDied","Data":"3703c7a0066a6584621728d0695721b30bca7cbc891760dbc17a30dbc0409a7e"} Jan 30 09:16:32 crc kubenswrapper[4870]: I0130 09:16:32.326511 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jjl7n" podStartSLOduration=2.760471499 podStartE2EDuration="8.326494217s" podCreationTimestamp="2026-01-30 09:16:24 +0000 UTC" firstStartedPulling="2026-01-30 09:16:26.197945676 +0000 UTC m=+4024.893492825" lastFinishedPulling="2026-01-30 09:16:31.763968424 +0000 UTC m=+4030.459515543" observedRunningTime="2026-01-30 09:16:32.316713862 +0000 UTC m=+4031.012260971" watchObservedRunningTime="2026-01-30 09:16:32.326494217 +0000 UTC m=+4031.022041316" Jan 30 09:16:33 crc kubenswrapper[4870]: I0130 09:16:33.331488 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbjtr" event={"ID":"aad36608-62a8-434a-899e-2383285678ba","Type":"ContainerStarted","Data":"b2a057e454fb8ea1e6f538c999f8939fb5e124fde99577ab0faebaa82d0442b9"} Jan 30 09:16:33 crc kubenswrapper[4870]: I0130 09:16:33.360844 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dbjtr" podStartSLOduration=3.627122035 podStartE2EDuration="13.360821339s" podCreationTimestamp="2026-01-30 09:16:20 +0000 UTC" firstStartedPulling="2026-01-30 09:16:23.151988243 +0000 UTC m=+4021.847535352" lastFinishedPulling="2026-01-30 09:16:32.885687547 +0000 UTC m=+4031.581234656" observedRunningTime="2026-01-30 09:16:33.352480429 +0000 UTC m=+4032.048027538" watchObservedRunningTime="2026-01-30 09:16:33.360821339 +0000 UTC m=+4032.056368448" Jan 30 09:16:34 crc kubenswrapper[4870]: I0130 09:16:34.762817 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jjl7n" Jan 30 09:16:34 crc kubenswrapper[4870]: I0130 09:16:34.763219 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jjl7n" Jan 30 09:16:35 crc kubenswrapper[4870]: I0130 09:16:35.828371 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jjl7n" podUID="d63faf5c-d054-4828-b211-c0f100f1f4ca" containerName="registry-server" probeResult="failure" output=< Jan 30 09:16:35 crc kubenswrapper[4870]: timeout: failed to connect service ":50051" within 1s Jan 30 09:16:35 crc kubenswrapper[4870]: > Jan 30 09:16:41 crc kubenswrapper[4870]: I0130 09:16:41.279727 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dbjtr" Jan 30 09:16:41 crc kubenswrapper[4870]: I0130 09:16:41.281028 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dbjtr" Jan 30 09:16:42 crc kubenswrapper[4870]: I0130 09:16:42.351058 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dbjtr" podUID="aad36608-62a8-434a-899e-2383285678ba" containerName="registry-server" probeResult="failure" output=< Jan 30 09:16:42 crc kubenswrapper[4870]: timeout: failed to connect service ":50051" within 1s Jan 30 09:16:42 crc kubenswrapper[4870]: > Jan 30 09:16:46 crc kubenswrapper[4870]: I0130 09:16:46.281323 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jjl7n" podUID="d63faf5c-d054-4828-b211-c0f100f1f4ca" 
containerName="registry-server" probeResult="failure" output=< Jan 30 09:16:46 crc kubenswrapper[4870]: timeout: failed to connect service ":50051" within 1s Jan 30 09:16:46 crc kubenswrapper[4870]: > Jan 30 09:16:51 crc kubenswrapper[4870]: I0130 09:16:51.373015 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dbjtr" Jan 30 09:16:51 crc kubenswrapper[4870]: I0130 09:16:51.440645 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dbjtr" Jan 30 09:16:52 crc kubenswrapper[4870]: I0130 09:16:52.143745 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dbjtr"] Jan 30 09:16:52 crc kubenswrapper[4870]: I0130 09:16:52.529288 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dbjtr" podUID="aad36608-62a8-434a-899e-2383285678ba" containerName="registry-server" containerID="cri-o://b2a057e454fb8ea1e6f538c999f8939fb5e124fde99577ab0faebaa82d0442b9" gracePeriod=2 Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.086598 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dbjtr" Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.197243 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aad36608-62a8-434a-899e-2383285678ba-catalog-content\") pod \"aad36608-62a8-434a-899e-2383285678ba\" (UID: \"aad36608-62a8-434a-899e-2383285678ba\") " Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.197339 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aad36608-62a8-434a-899e-2383285678ba-utilities\") pod \"aad36608-62a8-434a-899e-2383285678ba\" (UID: \"aad36608-62a8-434a-899e-2383285678ba\") " Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.197386 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwkbc\" (UniqueName: \"kubernetes.io/projected/aad36608-62a8-434a-899e-2383285678ba-kube-api-access-cwkbc\") pod \"aad36608-62a8-434a-899e-2383285678ba\" (UID: \"aad36608-62a8-434a-899e-2383285678ba\") " Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.198414 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aad36608-62a8-434a-899e-2383285678ba-utilities" (OuterVolumeSpecName: "utilities") pod "aad36608-62a8-434a-899e-2383285678ba" (UID: "aad36608-62a8-434a-899e-2383285678ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.202389 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aad36608-62a8-434a-899e-2383285678ba-kube-api-access-cwkbc" (OuterVolumeSpecName: "kube-api-access-cwkbc") pod "aad36608-62a8-434a-899e-2383285678ba" (UID: "aad36608-62a8-434a-899e-2383285678ba"). InnerVolumeSpecName "kube-api-access-cwkbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.299750 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aad36608-62a8-434a-899e-2383285678ba-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.299798 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwkbc\" (UniqueName: \"kubernetes.io/projected/aad36608-62a8-434a-899e-2383285678ba-kube-api-access-cwkbc\") on node \"crc\" DevicePath \"\"" Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.336896 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aad36608-62a8-434a-899e-2383285678ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aad36608-62a8-434a-899e-2383285678ba" (UID: "aad36608-62a8-434a-899e-2383285678ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.401396 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aad36608-62a8-434a-899e-2383285678ba-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.543534 4870 generic.go:334] "Generic (PLEG): container finished" podID="aad36608-62a8-434a-899e-2383285678ba" containerID="b2a057e454fb8ea1e6f538c999f8939fb5e124fde99577ab0faebaa82d0442b9" exitCode=0 Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.543602 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbjtr" event={"ID":"aad36608-62a8-434a-899e-2383285678ba","Type":"ContainerDied","Data":"b2a057e454fb8ea1e6f538c999f8939fb5e124fde99577ab0faebaa82d0442b9"} Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.545591 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbjtr" event={"ID":"aad36608-62a8-434a-899e-2383285678ba","Type":"ContainerDied","Data":"1c7d72833670a470a189ef9ff7a4db35f8fb4ede35202afd047cf7958ef6ba71"} Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.543619 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dbjtr" Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.545648 4870 scope.go:117] "RemoveContainer" containerID="b2a057e454fb8ea1e6f538c999f8939fb5e124fde99577ab0faebaa82d0442b9" Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.586709 4870 scope.go:117] "RemoveContainer" containerID="3703c7a0066a6584621728d0695721b30bca7cbc891760dbc17a30dbc0409a7e" Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.588057 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dbjtr"] Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.598595 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dbjtr"] Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.618834 4870 scope.go:117] "RemoveContainer" containerID="52dd63245a1d7e8beb539c26a9811344da5f713cd40dc5ebd64a530ff9ba888b" Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.664009 4870 scope.go:117] "RemoveContainer" containerID="b2a057e454fb8ea1e6f538c999f8939fb5e124fde99577ab0faebaa82d0442b9" Jan 30 09:16:53 crc kubenswrapper[4870]: E0130 09:16:53.664521 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2a057e454fb8ea1e6f538c999f8939fb5e124fde99577ab0faebaa82d0442b9\": container with ID starting with b2a057e454fb8ea1e6f538c999f8939fb5e124fde99577ab0faebaa82d0442b9 not found: ID does not exist" containerID="b2a057e454fb8ea1e6f538c999f8939fb5e124fde99577ab0faebaa82d0442b9" Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.664566 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2a057e454fb8ea1e6f538c999f8939fb5e124fde99577ab0faebaa82d0442b9"} err="failed to get container status \"b2a057e454fb8ea1e6f538c999f8939fb5e124fde99577ab0faebaa82d0442b9\": rpc error: code = NotFound desc = could not find container \"b2a057e454fb8ea1e6f538c999f8939fb5e124fde99577ab0faebaa82d0442b9\": container with ID starting with b2a057e454fb8ea1e6f538c999f8939fb5e124fde99577ab0faebaa82d0442b9 not found: ID does not exist" Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.664587 4870 scope.go:117] "RemoveContainer" containerID="3703c7a0066a6584621728d0695721b30bca7cbc891760dbc17a30dbc0409a7e" Jan 30 09:16:53 crc kubenswrapper[4870]: E0130 09:16:53.664816 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3703c7a0066a6584621728d0695721b30bca7cbc891760dbc17a30dbc0409a7e\": container with ID starting with 3703c7a0066a6584621728d0695721b30bca7cbc891760dbc17a30dbc0409a7e not found: ID does not exist" containerID="3703c7a0066a6584621728d0695721b30bca7cbc891760dbc17a30dbc0409a7e" Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.664907 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3703c7a0066a6584621728d0695721b30bca7cbc891760dbc17a30dbc0409a7e"} err="failed to get container status \"3703c7a0066a6584621728d0695721b30bca7cbc891760dbc17a30dbc0409a7e\": rpc error: code = NotFound desc = could not find container \"3703c7a0066a6584621728d0695721b30bca7cbc891760dbc17a30dbc0409a7e\": container with ID starting with 3703c7a0066a6584621728d0695721b30bca7cbc891760dbc17a30dbc0409a7e not found: ID does not exist" Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.664992 4870 scope.go:117] "RemoveContainer" 
containerID="52dd63245a1d7e8beb539c26a9811344da5f713cd40dc5ebd64a530ff9ba888b" Jan 30 09:16:53 crc kubenswrapper[4870]: E0130 09:16:53.665198 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52dd63245a1d7e8beb539c26a9811344da5f713cd40dc5ebd64a530ff9ba888b\": container with ID starting with 52dd63245a1d7e8beb539c26a9811344da5f713cd40dc5ebd64a530ff9ba888b not found: ID does not exist" containerID="52dd63245a1d7e8beb539c26a9811344da5f713cd40dc5ebd64a530ff9ba888b" Jan 30 09:16:53 crc kubenswrapper[4870]: I0130 09:16:53.665227 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52dd63245a1d7e8beb539c26a9811344da5f713cd40dc5ebd64a530ff9ba888b"} err="failed to get container status \"52dd63245a1d7e8beb539c26a9811344da5f713cd40dc5ebd64a530ff9ba888b\": rpc error: code = NotFound desc = could not find container \"52dd63245a1d7e8beb539c26a9811344da5f713cd40dc5ebd64a530ff9ba888b\": container with ID starting with 52dd63245a1d7e8beb539c26a9811344da5f713cd40dc5ebd64a530ff9ba888b not found: ID does not exist" Jan 30 09:16:54 crc kubenswrapper[4870]: I0130 09:16:54.094268 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aad36608-62a8-434a-899e-2383285678ba" path="/var/lib/kubelet/pods/aad36608-62a8-434a-899e-2383285678ba/volumes" Jan 30 09:16:54 crc kubenswrapper[4870]: I0130 09:16:54.815728 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jjl7n" Jan 30 09:16:54 crc kubenswrapper[4870]: I0130 09:16:54.888054 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jjl7n" Jan 30 09:16:57 crc kubenswrapper[4870]: I0130 09:16:57.141317 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jjl7n"] Jan 30 09:16:57 crc kubenswrapper[4870]: I0130 09:16:57.142285 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jjl7n" podUID="d63faf5c-d054-4828-b211-c0f100f1f4ca" containerName="registry-server" containerID="cri-o://9d90852ba42919283ddf463edcc40bc4b7429a7159617021ec5a64efb04dbac5" gracePeriod=2 Jan 30 09:16:57 crc kubenswrapper[4870]: I0130 09:16:57.602803 4870 generic.go:334] "Generic (PLEG): container finished" podID="d63faf5c-d054-4828-b211-c0f100f1f4ca" containerID="9d90852ba42919283ddf463edcc40bc4b7429a7159617021ec5a64efb04dbac5" exitCode=0 Jan 30 09:16:57 crc kubenswrapper[4870]: I0130 09:16:57.603133 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjl7n" event={"ID":"d63faf5c-d054-4828-b211-c0f100f1f4ca","Type":"ContainerDied","Data":"9d90852ba42919283ddf463edcc40bc4b7429a7159617021ec5a64efb04dbac5"} Jan 30 09:16:57 crc kubenswrapper[4870]: I0130 09:16:57.603186 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjl7n" event={"ID":"d63faf5c-d054-4828-b211-c0f100f1f4ca","Type":"ContainerDied","Data":"15ef4d9c1175afeaf8942eee6fe6591c948a8b9b5f0d512ace1d005e4aefce89"} Jan 30 09:16:57 crc kubenswrapper[4870]: I0130 09:16:57.603200 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15ef4d9c1175afeaf8942eee6fe6591c948a8b9b5f0d512ace1d005e4aefce89" Jan 30 09:16:57 crc kubenswrapper[4870]: I0130 09:16:57.696294 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jjl7n" Jan 30 09:16:57 crc kubenswrapper[4870]: I0130 09:16:57.804859 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d63faf5c-d054-4828-b211-c0f100f1f4ca-utilities\") pod \"d63faf5c-d054-4828-b211-c0f100f1f4ca\" (UID: \"d63faf5c-d054-4828-b211-c0f100f1f4ca\") " Jan 30 09:16:57 crc kubenswrapper[4870]: I0130 09:16:57.805312 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6k9s\" (UniqueName: \"kubernetes.io/projected/d63faf5c-d054-4828-b211-c0f100f1f4ca-kube-api-access-n6k9s\") pod \"d63faf5c-d054-4828-b211-c0f100f1f4ca\" (UID: \"d63faf5c-d054-4828-b211-c0f100f1f4ca\") " Jan 30 09:16:57 crc kubenswrapper[4870]: I0130 09:16:57.805427 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d63faf5c-d054-4828-b211-c0f100f1f4ca-catalog-content\") pod \"d63faf5c-d054-4828-b211-c0f100f1f4ca\" (UID: \"d63faf5c-d054-4828-b211-c0f100f1f4ca\") " Jan 30 09:16:57 crc kubenswrapper[4870]: I0130 09:16:57.805984 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d63faf5c-d054-4828-b211-c0f100f1f4ca-utilities" (OuterVolumeSpecName: "utilities") pod "d63faf5c-d054-4828-b211-c0f100f1f4ca" (UID: "d63faf5c-d054-4828-b211-c0f100f1f4ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:16:57 crc kubenswrapper[4870]: I0130 09:16:57.806322 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d63faf5c-d054-4828-b211-c0f100f1f4ca-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:16:57 crc kubenswrapper[4870]: I0130 09:16:57.812991 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d63faf5c-d054-4828-b211-c0f100f1f4ca-kube-api-access-n6k9s" (OuterVolumeSpecName: "kube-api-access-n6k9s") pod "d63faf5c-d054-4828-b211-c0f100f1f4ca" (UID: "d63faf5c-d054-4828-b211-c0f100f1f4ca"). InnerVolumeSpecName "kube-api-access-n6k9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:16:57 crc kubenswrapper[4870]: I0130 09:16:57.868626 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d63faf5c-d054-4828-b211-c0f100f1f4ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d63faf5c-d054-4828-b211-c0f100f1f4ca" (UID: "d63faf5c-d054-4828-b211-c0f100f1f4ca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:16:57 crc kubenswrapper[4870]: I0130 09:16:57.908470 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6k9s\" (UniqueName: \"kubernetes.io/projected/d63faf5c-d054-4828-b211-c0f100f1f4ca-kube-api-access-n6k9s\") on node \"crc\" DevicePath \"\"" Jan 30 09:16:57 crc kubenswrapper[4870]: I0130 09:16:57.908517 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d63faf5c-d054-4828-b211-c0f100f1f4ca-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:16:58 crc kubenswrapper[4870]: I0130 09:16:58.615272 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jjl7n" Jan 30 09:16:58 crc kubenswrapper[4870]: I0130 09:16:58.647618 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jjl7n"] Jan 30 09:16:58 crc kubenswrapper[4870]: I0130 09:16:58.659900 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jjl7n"] Jan 30 09:17:00 crc kubenswrapper[4870]: I0130 09:17:00.085387 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d63faf5c-d054-4828-b211-c0f100f1f4ca" path="/var/lib/kubelet/pods/d63faf5c-d054-4828-b211-c0f100f1f4ca/volumes" Jan 30 09:17:25 crc kubenswrapper[4870]: I0130 09:17:25.250320 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:17:25 crc kubenswrapper[4870]: I0130 09:17:25.251339 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:17:43 crc kubenswrapper[4870]: I0130 09:17:43.127410 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xltn8"] Jan 30 09:17:43 crc kubenswrapper[4870]: E0130 09:17:43.129684 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad36608-62a8-434a-899e-2383285678ba" containerName="extract-utilities" Jan 30 09:17:43 crc kubenswrapper[4870]: I0130 09:17:43.129730 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad36608-62a8-434a-899e-2383285678ba" containerName="extract-utilities" Jan 30 09:17:43 crc kubenswrapper[4870]: E0130 09:17:43.129774 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad36608-62a8-434a-899e-2383285678ba" containerName="extract-content" Jan 30 09:17:43 crc kubenswrapper[4870]: I0130 09:17:43.129800 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad36608-62a8-434a-899e-2383285678ba" containerName="extract-content" Jan 30 09:17:43 crc kubenswrapper[4870]: E0130 09:17:43.129822 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d63faf5c-d054-4828-b211-c0f100f1f4ca" containerName="extract-utilities" Jan 30 09:17:43 crc kubenswrapper[4870]: I0130 09:17:43.129831 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d63faf5c-d054-4828-b211-c0f100f1f4ca" containerName="extract-utilities" Jan 30 09:17:43 crc kubenswrapper[4870]: E0130 09:17:43.129847 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d63faf5c-d054-4828-b211-c0f100f1f4ca" containerName="extract-content" Jan 30 09:17:43 crc kubenswrapper[4870]: I0130 09:17:43.129856 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d63faf5c-d054-4828-b211-c0f100f1f4ca" containerName="extract-content" Jan 30 09:17:43 crc kubenswrapper[4870]: E0130 09:17:43.129921 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d63faf5c-d054-4828-b211-c0f100f1f4ca" containerName="registry-server" Jan 30 09:17:43 crc kubenswrapper[4870]: I0130 09:17:43.129929 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d63faf5c-d054-4828-b211-c0f100f1f4ca" 
containerName="registry-server" Jan 30 09:17:43 crc kubenswrapper[4870]: E0130 09:17:43.129939 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad36608-62a8-434a-899e-2383285678ba" containerName="registry-server" Jan 30 09:17:43 crc kubenswrapper[4870]: I0130 09:17:43.129948 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad36608-62a8-434a-899e-2383285678ba" containerName="registry-server" Jan 30 09:17:43 crc kubenswrapper[4870]: I0130 09:17:43.130215 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad36608-62a8-434a-899e-2383285678ba" containerName="registry-server" Jan 30 09:17:43 crc kubenswrapper[4870]: I0130 09:17:43.130236 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="d63faf5c-d054-4828-b211-c0f100f1f4ca" containerName="registry-server" Jan 30 09:17:43 crc kubenswrapper[4870]: I0130 09:17:43.133326 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xltn8" Jan 30 09:17:43 crc kubenswrapper[4870]: I0130 09:17:43.147115 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xltn8"] Jan 30 09:17:43 crc kubenswrapper[4870]: I0130 09:17:43.225485 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/946c4108-ce02-48eb-b8de-4ac9d6b4e6e6-utilities\") pod \"community-operators-xltn8\" (UID: \"946c4108-ce02-48eb-b8de-4ac9d6b4e6e6\") " pod="openshift-marketplace/community-operators-xltn8" Jan 30 09:17:43 crc kubenswrapper[4870]: I0130 09:17:43.225654 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-788fg\" (UniqueName: \"kubernetes.io/projected/946c4108-ce02-48eb-b8de-4ac9d6b4e6e6-kube-api-access-788fg\") pod \"community-operators-xltn8\" (UID: \"946c4108-ce02-48eb-b8de-4ac9d6b4e6e6\") " pod="openshift-marketplace/community-operators-xltn8" Jan 30 09:17:43 crc kubenswrapper[4870]: I0130 09:17:43.226541 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/946c4108-ce02-48eb-b8de-4ac9d6b4e6e6-catalog-content\") pod \"community-operators-xltn8\" (UID: \"946c4108-ce02-48eb-b8de-4ac9d6b4e6e6\") " pod="openshift-marketplace/community-operators-xltn8" Jan 30 09:17:43 crc kubenswrapper[4870]: I0130 09:17:43.328339 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/946c4108-ce02-48eb-b8de-4ac9d6b4e6e6-utilities\") pod \"community-operators-xltn8\" (UID: \"946c4108-ce02-48eb-b8de-4ac9d6b4e6e6\") " pod="openshift-marketplace/community-operators-xltn8" Jan 30 09:17:43 crc kubenswrapper[4870]: I0130 09:17:43.328432 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-788fg\" (UniqueName: \"kubernetes.io/projected/946c4108-ce02-48eb-b8de-4ac9d6b4e6e6-kube-api-access-788fg\") pod \"community-operators-xltn8\" (UID: \"946c4108-ce02-48eb-b8de-4ac9d6b4e6e6\") " pod="openshift-marketplace/community-operators-xltn8" Jan 30 09:17:43 crc kubenswrapper[4870]: I0130 09:17:43.328511 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/946c4108-ce02-48eb-b8de-4ac9d6b4e6e6-catalog-content\") pod \"community-operators-xltn8\" (UID: 
\"946c4108-ce02-48eb-b8de-4ac9d6b4e6e6\") " pod="openshift-marketplace/community-operators-xltn8" Jan 30 09:17:43 crc kubenswrapper[4870]: I0130 09:17:43.328790 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/946c4108-ce02-48eb-b8de-4ac9d6b4e6e6-utilities\") pod \"community-operators-xltn8\" (UID: \"946c4108-ce02-48eb-b8de-4ac9d6b4e6e6\") " pod="openshift-marketplace/community-operators-xltn8" Jan 30 09:17:43 crc kubenswrapper[4870]: I0130 09:17:43.328835 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/946c4108-ce02-48eb-b8de-4ac9d6b4e6e6-catalog-content\") pod \"community-operators-xltn8\" (UID: \"946c4108-ce02-48eb-b8de-4ac9d6b4e6e6\") " pod="openshift-marketplace/community-operators-xltn8" Jan 30 09:17:43 crc kubenswrapper[4870]: I0130 09:17:43.361344 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-788fg\" (UniqueName: \"kubernetes.io/projected/946c4108-ce02-48eb-b8de-4ac9d6b4e6e6-kube-api-access-788fg\") pod \"community-operators-xltn8\" (UID: \"946c4108-ce02-48eb-b8de-4ac9d6b4e6e6\") " pod="openshift-marketplace/community-operators-xltn8" Jan 30 09:17:43 crc kubenswrapper[4870]: I0130 09:17:43.465247 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xltn8" Jan 30 09:17:44 crc kubenswrapper[4870]: I0130 09:17:44.070266 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xltn8"] Jan 30 09:17:44 crc kubenswrapper[4870]: I0130 09:17:44.122909 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xltn8" event={"ID":"946c4108-ce02-48eb-b8de-4ac9d6b4e6e6","Type":"ContainerStarted","Data":"540d574ff699baeb20b5d5ca0c0e348d73245fc1b9ba622032b240f6bcf3e045"} Jan 30 09:17:45 crc kubenswrapper[4870]: I0130 09:17:45.132528 4870 generic.go:334] "Generic (PLEG): container finished" podID="946c4108-ce02-48eb-b8de-4ac9d6b4e6e6" containerID="01ea8007b3bb0cbc4e47aa121bb382d34da19f0d27e1f3c77abeddb5f819b646" exitCode=0 Jan 30 09:17:45 crc kubenswrapper[4870]: I0130 09:17:45.132594 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xltn8" event={"ID":"946c4108-ce02-48eb-b8de-4ac9d6b4e6e6","Type":"ContainerDied","Data":"01ea8007b3bb0cbc4e47aa121bb382d34da19f0d27e1f3c77abeddb5f819b646"} Jan 30 09:17:45 crc kubenswrapper[4870]: I0130 09:17:45.135323 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:17:47 crc kubenswrapper[4870]: I0130 09:17:47.162415 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xltn8" event={"ID":"946c4108-ce02-48eb-b8de-4ac9d6b4e6e6","Type":"ContainerStarted","Data":"b645803acf590699cfe77d3b6594c393ad712f63f991d278bb28576169b9fe43"} Jan 30 09:17:48 crc kubenswrapper[4870]: I0130 09:17:48.177775 4870 generic.go:334] "Generic (PLEG): container finished" podID="946c4108-ce02-48eb-b8de-4ac9d6b4e6e6" containerID="b645803acf590699cfe77d3b6594c393ad712f63f991d278bb28576169b9fe43" exitCode=0 Jan 30 09:17:48 crc kubenswrapper[4870]: I0130 09:17:48.177843 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xltn8" 
event={"ID":"946c4108-ce02-48eb-b8de-4ac9d6b4e6e6","Type":"ContainerDied","Data":"b645803acf590699cfe77d3b6594c393ad712f63f991d278bb28576169b9fe43"} Jan 30 09:17:49 crc kubenswrapper[4870]: I0130 09:17:49.190614 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xltn8" event={"ID":"946c4108-ce02-48eb-b8de-4ac9d6b4e6e6","Type":"ContainerStarted","Data":"8a2e5018eed8f0b1d1186c606d6a4aaf2a29e90c9c34afae09d54ab37b3a0067"} Jan 30 09:17:49 crc kubenswrapper[4870]: I0130 09:17:49.215866 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xltn8" podStartSLOduration=2.757683645 podStartE2EDuration="6.215843675s" podCreationTimestamp="2026-01-30 09:17:43 +0000 UTC" firstStartedPulling="2026-01-30 09:17:45.135115956 +0000 UTC m=+4103.830663065" lastFinishedPulling="2026-01-30 09:17:48.593275986 +0000 UTC m=+4107.288823095" observedRunningTime="2026-01-30 09:17:49.210040994 +0000 UTC m=+4107.905588103" watchObservedRunningTime="2026-01-30 09:17:49.215843675 +0000 UTC m=+4107.911390784" Jan 30 09:17:53 crc kubenswrapper[4870]: I0130 09:17:53.466179 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xltn8" Jan 30 09:17:53 crc kubenswrapper[4870]: I0130 09:17:53.466614 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xltn8" Jan 30 09:17:53 crc kubenswrapper[4870]: I0130 09:17:53.523824 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xltn8" Jan 30 09:17:54 crc kubenswrapper[4870]: I0130 09:17:54.291424 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xltn8" Jan 30 09:17:54 crc kubenswrapper[4870]: I0130 09:17:54.357171 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xltn8"] Jan 30 09:17:55 crc kubenswrapper[4870]: I0130 09:17:55.249909 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:17:55 crc kubenswrapper[4870]: I0130 09:17:55.249970 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:17:56 crc kubenswrapper[4870]: I0130 09:17:56.255323 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xltn8" podUID="946c4108-ce02-48eb-b8de-4ac9d6b4e6e6" containerName="registry-server" containerID="cri-o://8a2e5018eed8f0b1d1186c606d6a4aaf2a29e90c9c34afae09d54ab37b3a0067" gracePeriod=2 Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.058144 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xltn8" Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.102276 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/946c4108-ce02-48eb-b8de-4ac9d6b4e6e6-catalog-content\") pod \"946c4108-ce02-48eb-b8de-4ac9d6b4e6e6\" (UID: \"946c4108-ce02-48eb-b8de-4ac9d6b4e6e6\") " Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.102533 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-788fg\" (UniqueName: \"kubernetes.io/projected/946c4108-ce02-48eb-b8de-4ac9d6b4e6e6-kube-api-access-788fg\") pod \"946c4108-ce02-48eb-b8de-4ac9d6b4e6e6\" (UID: \"946c4108-ce02-48eb-b8de-4ac9d6b4e6e6\") " Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.102658 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/946c4108-ce02-48eb-b8de-4ac9d6b4e6e6-utilities\") pod \"946c4108-ce02-48eb-b8de-4ac9d6b4e6e6\" (UID: \"946c4108-ce02-48eb-b8de-4ac9d6b4e6e6\") " Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.104607 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/946c4108-ce02-48eb-b8de-4ac9d6b4e6e6-utilities" (OuterVolumeSpecName: "utilities") pod "946c4108-ce02-48eb-b8de-4ac9d6b4e6e6" (UID: "946c4108-ce02-48eb-b8de-4ac9d6b4e6e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.116817 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/946c4108-ce02-48eb-b8de-4ac9d6b4e6e6-kube-api-access-788fg" (OuterVolumeSpecName: "kube-api-access-788fg") pod "946c4108-ce02-48eb-b8de-4ac9d6b4e6e6" (UID: "946c4108-ce02-48eb-b8de-4ac9d6b4e6e6"). InnerVolumeSpecName "kube-api-access-788fg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.158530 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/946c4108-ce02-48eb-b8de-4ac9d6b4e6e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "946c4108-ce02-48eb-b8de-4ac9d6b4e6e6" (UID: "946c4108-ce02-48eb-b8de-4ac9d6b4e6e6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.205173 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/946c4108-ce02-48eb-b8de-4ac9d6b4e6e6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.205210 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-788fg\" (UniqueName: \"kubernetes.io/projected/946c4108-ce02-48eb-b8de-4ac9d6b4e6e6-kube-api-access-788fg\") on node \"crc\" DevicePath \"\"" Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.205224 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/946c4108-ce02-48eb-b8de-4ac9d6b4e6e6-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.293847 4870 generic.go:334] "Generic (PLEG): container finished" podID="946c4108-ce02-48eb-b8de-4ac9d6b4e6e6" containerID="8a2e5018eed8f0b1d1186c606d6a4aaf2a29e90c9c34afae09d54ab37b3a0067" exitCode=0 Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.293921 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xltn8" event={"ID":"946c4108-ce02-48eb-b8de-4ac9d6b4e6e6","Type":"ContainerDied","Data":"8a2e5018eed8f0b1d1186c606d6a4aaf2a29e90c9c34afae09d54ab37b3a0067"} Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.293959 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xltn8" event={"ID":"946c4108-ce02-48eb-b8de-4ac9d6b4e6e6","Type":"ContainerDied","Data":"540d574ff699baeb20b5d5ca0c0e348d73245fc1b9ba622032b240f6bcf3e045"} Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.293978 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xltn8" Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.293985 4870 scope.go:117] "RemoveContainer" containerID="8a2e5018eed8f0b1d1186c606d6a4aaf2a29e90c9c34afae09d54ab37b3a0067" Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.334379 4870 scope.go:117] "RemoveContainer" containerID="b645803acf590699cfe77d3b6594c393ad712f63f991d278bb28576169b9fe43" Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.356712 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xltn8"] Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.367781 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xltn8"] Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.374174 4870 scope.go:117] "RemoveContainer" containerID="01ea8007b3bb0cbc4e47aa121bb382d34da19f0d27e1f3c77abeddb5f819b646" Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.446736 4870 scope.go:117] "RemoveContainer" containerID="8a2e5018eed8f0b1d1186c606d6a4aaf2a29e90c9c34afae09d54ab37b3a0067" Jan 30 09:17:57 crc kubenswrapper[4870]: E0130 09:17:57.451027 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a2e5018eed8f0b1d1186c606d6a4aaf2a29e90c9c34afae09d54ab37b3a0067\": container with ID starting with 8a2e5018eed8f0b1d1186c606d6a4aaf2a29e90c9c34afae09d54ab37b3a0067 not found: ID does not exist" containerID="8a2e5018eed8f0b1d1186c606d6a4aaf2a29e90c9c34afae09d54ab37b3a0067" Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.451067 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a2e5018eed8f0b1d1186c606d6a4aaf2a29e90c9c34afae09d54ab37b3a0067"} err="failed to get container status \"8a2e5018eed8f0b1d1186c606d6a4aaf2a29e90c9c34afae09d54ab37b3a0067\": rpc error: code = NotFound desc = could not find container \"8a2e5018eed8f0b1d1186c606d6a4aaf2a29e90c9c34afae09d54ab37b3a0067\": container with ID starting with 8a2e5018eed8f0b1d1186c606d6a4aaf2a29e90c9c34afae09d54ab37b3a0067 not found: ID does not exist" Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.451097 4870 scope.go:117] "RemoveContainer" containerID="b645803acf590699cfe77d3b6594c393ad712f63f991d278bb28576169b9fe43" Jan 30 09:17:57 crc kubenswrapper[4870]: E0130 09:17:57.451651 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b645803acf590699cfe77d3b6594c393ad712f63f991d278bb28576169b9fe43\": container with ID starting with b645803acf590699cfe77d3b6594c393ad712f63f991d278bb28576169b9fe43 not found: ID does not exist" containerID="b645803acf590699cfe77d3b6594c393ad712f63f991d278bb28576169b9fe43" Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.451692 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b645803acf590699cfe77d3b6594c393ad712f63f991d278bb28576169b9fe43"} err="failed to get container status \"b645803acf590699cfe77d3b6594c393ad712f63f991d278bb28576169b9fe43\": rpc error: code = NotFound desc = could not find container \"b645803acf590699cfe77d3b6594c393ad712f63f991d278bb28576169b9fe43\": container with ID starting with b645803acf590699cfe77d3b6594c393ad712f63f991d278bb28576169b9fe43 not found: ID does not exist" Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.451711 4870 scope.go:117] "RemoveContainer" 
containerID="01ea8007b3bb0cbc4e47aa121bb382d34da19f0d27e1f3c77abeddb5f819b646" Jan 30 09:17:57 crc kubenswrapper[4870]: E0130 09:17:57.453119 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01ea8007b3bb0cbc4e47aa121bb382d34da19f0d27e1f3c77abeddb5f819b646\": container with ID starting with 01ea8007b3bb0cbc4e47aa121bb382d34da19f0d27e1f3c77abeddb5f819b646 not found: ID does not exist" containerID="01ea8007b3bb0cbc4e47aa121bb382d34da19f0d27e1f3c77abeddb5f819b646" Jan 30 09:17:57 crc kubenswrapper[4870]: I0130 09:17:57.453176 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ea8007b3bb0cbc4e47aa121bb382d34da19f0d27e1f3c77abeddb5f819b646"} err="failed to get container status \"01ea8007b3bb0cbc4e47aa121bb382d34da19f0d27e1f3c77abeddb5f819b646\": rpc error: code = NotFound desc = could not find container \"01ea8007b3bb0cbc4e47aa121bb382d34da19f0d27e1f3c77abeddb5f819b646\": container with ID starting with 01ea8007b3bb0cbc4e47aa121bb382d34da19f0d27e1f3c77abeddb5f819b646 not found: ID does not exist" Jan 30 09:17:58 crc kubenswrapper[4870]: I0130 09:17:58.089197 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="946c4108-ce02-48eb-b8de-4ac9d6b4e6e6" path="/var/lib/kubelet/pods/946c4108-ce02-48eb-b8de-4ac9d6b4e6e6/volumes" Jan 30 09:18:25 crc kubenswrapper[4870]: I0130 09:18:25.249545 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:18:25 crc kubenswrapper[4870]: I0130 09:18:25.250137 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:18:25 crc kubenswrapper[4870]: I0130 09:18:25.250190 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 09:18:25 crc kubenswrapper[4870]: I0130 09:18:25.251113 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"78ce0353b7c3323cf1e9811378a7fe1749f61d99675d2c6c610bade4f28a064b"} pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:18:25 crc kubenswrapper[4870]: I0130 09:18:25.251164 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" containerID="cri-o://78ce0353b7c3323cf1e9811378a7fe1749f61d99675d2c6c610bade4f28a064b" gracePeriod=600 Jan 30 09:18:25 crc kubenswrapper[4870]: I0130 09:18:25.615900 4870 generic.go:334] "Generic (PLEG): container finished" podID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerID="78ce0353b7c3323cf1e9811378a7fe1749f61d99675d2c6c610bade4f28a064b" exitCode=0 Jan 30 09:18:25 crc kubenswrapper[4870]: I0130 09:18:25.616014 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerDied","Data":"78ce0353b7c3323cf1e9811378a7fe1749f61d99675d2c6c610bade4f28a064b"} Jan 30 09:18:25 crc kubenswrapper[4870]: I0130 09:18:25.616537 4870 scope.go:117] "RemoveContainer" containerID="b46440b22ae9d2fa04397a53f0f4fc091b7e2f214fbae6daba34107e15d81271" Jan 30 09:18:26 crc kubenswrapper[4870]: I0130 09:18:26.631082 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerStarted","Data":"a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27"} Jan 30 09:20:25 crc kubenswrapper[4870]: I0130 09:20:25.249579 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:20:25 crc kubenswrapper[4870]: I0130 09:20:25.250542 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:20:55 crc kubenswrapper[4870]: I0130 09:20:55.250093 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:20:55 crc kubenswrapper[4870]: I0130 09:20:55.250972 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:21:25 crc kubenswrapper[4870]: I0130 09:21:25.250187 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:21:25 crc kubenswrapper[4870]: I0130 09:21:25.251036 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:21:25 crc kubenswrapper[4870]: I0130 09:21:25.251121 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 09:21:25 crc kubenswrapper[4870]: I0130 09:21:25.252343 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27"} pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:21:25 crc kubenswrapper[4870]: I0130 09:21:25.252437 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" containerID="cri-o://a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" gracePeriod=600 Jan 30 09:21:25 crc kubenswrapper[4870]: E0130 09:21:25.384276 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:21:25 crc kubenswrapper[4870]: I0130 09:21:25.631759 4870 generic.go:334] "Generic (PLEG): container finished" podID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" exitCode=0 Jan 30 09:21:25 crc kubenswrapper[4870]: I0130 09:21:25.631805 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerDied","Data":"a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27"} Jan 30 09:21:25 crc kubenswrapper[4870]: I0130 09:21:25.631842 4870 scope.go:117] "RemoveContainer" containerID="78ce0353b7c3323cf1e9811378a7fe1749f61d99675d2c6c610bade4f28a064b" Jan 30 09:21:25 crc kubenswrapper[4870]: I0130 09:21:25.632537 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" Jan 30 09:21:25 crc kubenswrapper[4870]: E0130 09:21:25.632916 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:21:40 crc kubenswrapper[4870]: I0130 09:21:40.075554 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" Jan 30 09:21:40 crc kubenswrapper[4870]: E0130 09:21:40.076623 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:21:51 crc kubenswrapper[4870]: I0130 09:21:51.075531 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" Jan 30 09:21:51 crc kubenswrapper[4870]: E0130 09:21:51.076950 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:22:04 crc kubenswrapper[4870]: I0130 09:22:04.074985 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" Jan 30 09:22:04 crc kubenswrapper[4870]: E0130 09:22:04.075819 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:22:07 crc kubenswrapper[4870]: I0130 09:22:07.597507 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pvdlz"] Jan 30 09:22:07 crc kubenswrapper[4870]: E0130 09:22:07.598762 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946c4108-ce02-48eb-b8de-4ac9d6b4e6e6" containerName="extract-content" Jan 30 09:22:07 crc kubenswrapper[4870]: I0130 09:22:07.598783 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="946c4108-ce02-48eb-b8de-4ac9d6b4e6e6" containerName="extract-content" Jan 30 09:22:07 crc kubenswrapper[4870]: E0130 09:22:07.598808 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946c4108-ce02-48eb-b8de-4ac9d6b4e6e6" containerName="registry-server" Jan 30 09:22:07 crc kubenswrapper[4870]: I0130 09:22:07.598822 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="946c4108-ce02-48eb-b8de-4ac9d6b4e6e6" containerName="registry-server" Jan 30 09:22:07 crc kubenswrapper[4870]: E0130 09:22:07.598864 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946c4108-ce02-48eb-b8de-4ac9d6b4e6e6" containerName="extract-utilities" Jan 30 09:22:07 crc kubenswrapper[4870]: I0130 09:22:07.598898 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="946c4108-ce02-48eb-b8de-4ac9d6b4e6e6" containerName="extract-utilities" Jan 30 09:22:07 crc kubenswrapper[4870]: I0130 09:22:07.599281 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="946c4108-ce02-48eb-b8de-4ac9d6b4e6e6" containerName="registry-server" Jan 30 09:22:07 crc kubenswrapper[4870]: I0130 09:22:07.601905 4870 util.go:30] "No sandbox for pod can be found. 
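From 09:18:25 onward, machine-config-daemon repeatedly fails its HTTP liveness probe (GET http://127.0.0.1:8798/health is connection-refused), is restarted, and then lands in CrashLoopBackOff with `back-off 5m0s`. That 5m0s is kubelet's restart back-off ceiling: the delay starts at 10s, doubles per restart, and is capped at five minutes (resetting after ten minutes of stable running). A small sketch of the schedule:

```go
// Sketch of the CrashLoopBackOff schedule implied by "back-off 5m0s
// restarting failed container": 10s initial delay, doubled per
// restart, capped at 5m. (The reset after 10 minutes of stable
// running is omitted here.)
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial  = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	delay := initial
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("restart %d: wait %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // the "back-off 5m0s" ceiling in the log
		}
	}
}
```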
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvdlz" Jan 30 09:22:07 crc kubenswrapper[4870]: I0130 09:22:07.610864 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvdlz"] Jan 30 09:22:07 crc kubenswrapper[4870]: I0130 09:22:07.632370 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2cbc\" (UniqueName: \"kubernetes.io/projected/557c0167-03e1-4176-89dc-88cbef924f2d-kube-api-access-b2cbc\") pod \"redhat-marketplace-pvdlz\" (UID: \"557c0167-03e1-4176-89dc-88cbef924f2d\") " pod="openshift-marketplace/redhat-marketplace-pvdlz" Jan 30 09:22:07 crc kubenswrapper[4870]: I0130 09:22:07.632433 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/557c0167-03e1-4176-89dc-88cbef924f2d-utilities\") pod \"redhat-marketplace-pvdlz\" (UID: \"557c0167-03e1-4176-89dc-88cbef924f2d\") " pod="openshift-marketplace/redhat-marketplace-pvdlz" Jan 30 09:22:07 crc kubenswrapper[4870]: I0130 09:22:07.632540 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/557c0167-03e1-4176-89dc-88cbef924f2d-catalog-content\") pod \"redhat-marketplace-pvdlz\" (UID: \"557c0167-03e1-4176-89dc-88cbef924f2d\") " pod="openshift-marketplace/redhat-marketplace-pvdlz" Jan 30 09:22:07 crc kubenswrapper[4870]: I0130 09:22:07.735058 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2cbc\" (UniqueName: \"kubernetes.io/projected/557c0167-03e1-4176-89dc-88cbef924f2d-kube-api-access-b2cbc\") pod \"redhat-marketplace-pvdlz\" (UID: \"557c0167-03e1-4176-89dc-88cbef924f2d\") " pod="openshift-marketplace/redhat-marketplace-pvdlz" Jan 30 09:22:07 crc kubenswrapper[4870]: I0130 09:22:07.735135 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/557c0167-03e1-4176-89dc-88cbef924f2d-utilities\") pod \"redhat-marketplace-pvdlz\" (UID: \"557c0167-03e1-4176-89dc-88cbef924f2d\") " pod="openshift-marketplace/redhat-marketplace-pvdlz" Jan 30 09:22:07 crc kubenswrapper[4870]: I0130 09:22:07.735240 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/557c0167-03e1-4176-89dc-88cbef924f2d-catalog-content\") pod \"redhat-marketplace-pvdlz\" (UID: \"557c0167-03e1-4176-89dc-88cbef924f2d\") " pod="openshift-marketplace/redhat-marketplace-pvdlz" Jan 30 09:22:07 crc kubenswrapper[4870]: I0130 09:22:07.735928 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/557c0167-03e1-4176-89dc-88cbef924f2d-catalog-content\") pod \"redhat-marketplace-pvdlz\" (UID: \"557c0167-03e1-4176-89dc-88cbef924f2d\") " pod="openshift-marketplace/redhat-marketplace-pvdlz" Jan 30 09:22:07 crc kubenswrapper[4870]: I0130 09:22:07.736507 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/557c0167-03e1-4176-89dc-88cbef924f2d-utilities\") pod \"redhat-marketplace-pvdlz\" (UID: \"557c0167-03e1-4176-89dc-88cbef924f2d\") " pod="openshift-marketplace/redhat-marketplace-pvdlz" Jan 30 09:22:07 crc kubenswrapper[4870]: I0130 09:22:07.768027 4870 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-b2cbc\" (UniqueName: \"kubernetes.io/projected/557c0167-03e1-4176-89dc-88cbef924f2d-kube-api-access-b2cbc\") pod \"redhat-marketplace-pvdlz\" (UID: \"557c0167-03e1-4176-89dc-88cbef924f2d\") " pod="openshift-marketplace/redhat-marketplace-pvdlz" Jan 30 09:22:07 crc kubenswrapper[4870]: I0130 09:22:07.951465 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvdlz" Jan 30 09:22:08 crc kubenswrapper[4870]: I0130 09:22:08.458496 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvdlz"] Jan 30 09:22:08 crc kubenswrapper[4870]: W0130 09:22:08.462108 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod557c0167_03e1_4176_89dc_88cbef924f2d.slice/crio-0aa9bcaea6ba301b86069968b571da51ccce8795756e555aa4d98a6aa66c8021 WatchSource:0}: Error finding container 0aa9bcaea6ba301b86069968b571da51ccce8795756e555aa4d98a6aa66c8021: Status 404 returned error can't find the container with id 0aa9bcaea6ba301b86069968b571da51ccce8795756e555aa4d98a6aa66c8021 Jan 30 09:22:09 crc kubenswrapper[4870]: I0130 09:22:09.124635 4870 generic.go:334] "Generic (PLEG): container finished" podID="557c0167-03e1-4176-89dc-88cbef924f2d" containerID="85d75aa3df5ba51083766d98dab3c127cb48ae1d8b08b9050b15950b6ccc4c67" exitCode=0 Jan 30 09:22:09 crc kubenswrapper[4870]: I0130 09:22:09.125024 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvdlz" event={"ID":"557c0167-03e1-4176-89dc-88cbef924f2d","Type":"ContainerDied","Data":"85d75aa3df5ba51083766d98dab3c127cb48ae1d8b08b9050b15950b6ccc4c67"} Jan 30 09:22:09 crc kubenswrapper[4870]: I0130 09:22:09.125056 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvdlz" event={"ID":"557c0167-03e1-4176-89dc-88cbef924f2d","Type":"ContainerStarted","Data":"0aa9bcaea6ba301b86069968b571da51ccce8795756e555aa4d98a6aa66c8021"} Jan 30 09:22:10 crc kubenswrapper[4870]: I0130 09:22:10.135853 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvdlz" event={"ID":"557c0167-03e1-4176-89dc-88cbef924f2d","Type":"ContainerStarted","Data":"2c0a1e1837f6a0f60772e976ef229a93da2a31800f3a31dcc7e73fd386908bfc"} Jan 30 09:22:11 crc kubenswrapper[4870]: I0130 09:22:11.147542 4870 generic.go:334] "Generic (PLEG): container finished" podID="557c0167-03e1-4176-89dc-88cbef924f2d" containerID="2c0a1e1837f6a0f60772e976ef229a93da2a31800f3a31dcc7e73fd386908bfc" exitCode=0 Jan 30 09:22:11 crc kubenswrapper[4870]: I0130 09:22:11.147762 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvdlz" event={"ID":"557c0167-03e1-4176-89dc-88cbef924f2d","Type":"ContainerDied","Data":"2c0a1e1837f6a0f60772e976ef229a93da2a31800f3a31dcc7e73fd386908bfc"} Jan 30 09:22:12 crc kubenswrapper[4870]: I0130 09:22:12.160237 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvdlz" event={"ID":"557c0167-03e1-4176-89dc-88cbef924f2d","Type":"ContainerStarted","Data":"1e69adb24152ee0cb8f50d991a0404e146f90566276c56f5dafb33d4ffb7ffa6"} Jan 30 09:22:12 crc kubenswrapper[4870]: I0130 09:22:12.193413 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pvdlz" podStartSLOduration=2.754915098 
podStartE2EDuration="5.193391707s" podCreationTimestamp="2026-01-30 09:22:07 +0000 UTC" firstStartedPulling="2026-01-30 09:22:09.128734253 +0000 UTC m=+4367.824281362" lastFinishedPulling="2026-01-30 09:22:11.567210852 +0000 UTC m=+4370.262757971" observedRunningTime="2026-01-30 09:22:12.180821514 +0000 UTC m=+4370.876368623" watchObservedRunningTime="2026-01-30 09:22:12.193391707 +0000 UTC m=+4370.888938816" Jan 30 09:22:15 crc kubenswrapper[4870]: I0130 09:22:15.074623 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" Jan 30 09:22:15 crc kubenswrapper[4870]: E0130 09:22:15.075411 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:22:17 crc kubenswrapper[4870]: I0130 09:22:17.952249 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pvdlz" Jan 30 09:22:17 crc kubenswrapper[4870]: I0130 09:22:17.952909 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pvdlz" Jan 30 09:22:17 crc kubenswrapper[4870]: I0130 09:22:17.996782 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pvdlz" Jan 30 09:22:18 crc kubenswrapper[4870]: I0130 09:22:18.269066 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pvdlz" Jan 30 09:22:18 crc kubenswrapper[4870]: I0130 09:22:18.323384 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvdlz"] Jan 30 09:22:20 crc kubenswrapper[4870]: I0130 09:22:20.234006 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pvdlz" podUID="557c0167-03e1-4176-89dc-88cbef924f2d" containerName="registry-server" containerID="cri-o://1e69adb24152ee0cb8f50d991a0404e146f90566276c56f5dafb33d4ffb7ffa6" gracePeriod=2 Jan 30 09:22:20 crc kubenswrapper[4870]: I0130 09:22:20.765739 4870 util.go:48] "No ready sandbox for pod can be found. 
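The redhat-marketplace-pvdlz entries at 09:22:17 show the probe lifecycle end to end: the startup probe goes `unhealthy`, then `started`, and only then does the readiness probe report `ready`. Below is a hedged sketch of how such a registry pod's probes are plausibly wired; the grpc_health_probe command and the thresholds are assumptions inferred from the ":50051 within 1s" output earlier, not values read from this cluster:

```go
// Hedged sketch: plausible probe wiring for a marketplace registry
// pod, consistent with the ":50051 within 1s" output and with the
// startup -> readiness ordering seen here. The exec command and the
// thresholds are assumptions, not values read from this cluster.
package example

import (
	corev1 "k8s.io/api/core/v1"
)

func registryProbes() (startup, readiness *corev1.Probe) {
	handler := corev1.ProbeHandler{
		Exec: &corev1.ExecAction{
			Command: []string{"grpc_health_probe", "-addr=:50051"},
		},
	}
	// The startup probe gates the others: readiness stays "" until the
	// startup probe reports started, the exact sequence in the log.
	startup = &corev1.Probe{
		ProbeHandler:     handler,
		PeriodSeconds:    10,
		FailureThreshold: 15, // assumed headroom for slow catalog extraction
	}
	readiness = &corev1.Probe{
		ProbeHandler:  handler,
		PeriodSeconds: 10,
	}
	return startup, readiness
}
```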
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvdlz" Jan 30 09:22:20 crc kubenswrapper[4870]: I0130 09:22:20.869214 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2cbc\" (UniqueName: \"kubernetes.io/projected/557c0167-03e1-4176-89dc-88cbef924f2d-kube-api-access-b2cbc\") pod \"557c0167-03e1-4176-89dc-88cbef924f2d\" (UID: \"557c0167-03e1-4176-89dc-88cbef924f2d\") " Jan 30 09:22:20 crc kubenswrapper[4870]: I0130 09:22:20.869371 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/557c0167-03e1-4176-89dc-88cbef924f2d-utilities\") pod \"557c0167-03e1-4176-89dc-88cbef924f2d\" (UID: \"557c0167-03e1-4176-89dc-88cbef924f2d\") " Jan 30 09:22:20 crc kubenswrapper[4870]: I0130 09:22:20.869444 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/557c0167-03e1-4176-89dc-88cbef924f2d-catalog-content\") pod \"557c0167-03e1-4176-89dc-88cbef924f2d\" (UID: \"557c0167-03e1-4176-89dc-88cbef924f2d\") " Jan 30 09:22:20 crc kubenswrapper[4870]: I0130 09:22:20.870387 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/557c0167-03e1-4176-89dc-88cbef924f2d-utilities" (OuterVolumeSpecName: "utilities") pod "557c0167-03e1-4176-89dc-88cbef924f2d" (UID: "557c0167-03e1-4176-89dc-88cbef924f2d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:22:20 crc kubenswrapper[4870]: I0130 09:22:20.876225 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/557c0167-03e1-4176-89dc-88cbef924f2d-kube-api-access-b2cbc" (OuterVolumeSpecName: "kube-api-access-b2cbc") pod "557c0167-03e1-4176-89dc-88cbef924f2d" (UID: "557c0167-03e1-4176-89dc-88cbef924f2d"). InnerVolumeSpecName "kube-api-access-b2cbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:22:20 crc kubenswrapper[4870]: I0130 09:22:20.893502 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/557c0167-03e1-4176-89dc-88cbef924f2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "557c0167-03e1-4176-89dc-88cbef924f2d" (UID: "557c0167-03e1-4176-89dc-88cbef924f2d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:22:20 crc kubenswrapper[4870]: I0130 09:22:20.973101 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/557c0167-03e1-4176-89dc-88cbef924f2d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:22:20 crc kubenswrapper[4870]: I0130 09:22:20.973155 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2cbc\" (UniqueName: \"kubernetes.io/projected/557c0167-03e1-4176-89dc-88cbef924f2d-kube-api-access-b2cbc\") on node \"crc\" DevicePath \"\"" Jan 30 09:22:20 crc kubenswrapper[4870]: I0130 09:22:20.973178 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/557c0167-03e1-4176-89dc-88cbef924f2d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:22:21 crc kubenswrapper[4870]: I0130 09:22:21.253139 4870 generic.go:334] "Generic (PLEG): container finished" podID="557c0167-03e1-4176-89dc-88cbef924f2d" containerID="1e69adb24152ee0cb8f50d991a0404e146f90566276c56f5dafb33d4ffb7ffa6" exitCode=0 Jan 30 09:22:21 crc kubenswrapper[4870]: I0130 09:22:21.253204 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvdlz" event={"ID":"557c0167-03e1-4176-89dc-88cbef924f2d","Type":"ContainerDied","Data":"1e69adb24152ee0cb8f50d991a0404e146f90566276c56f5dafb33d4ffb7ffa6"} Jan 30 09:22:21 crc kubenswrapper[4870]: I0130 09:22:21.253243 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvdlz" event={"ID":"557c0167-03e1-4176-89dc-88cbef924f2d","Type":"ContainerDied","Data":"0aa9bcaea6ba301b86069968b571da51ccce8795756e555aa4d98a6aa66c8021"} Jan 30 09:22:21 crc kubenswrapper[4870]: I0130 09:22:21.253265 4870 scope.go:117] "RemoveContainer" containerID="1e69adb24152ee0cb8f50d991a0404e146f90566276c56f5dafb33d4ffb7ffa6" Jan 30 09:22:21 crc kubenswrapper[4870]: I0130 09:22:21.253448 4870 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 09:22:21 crc kubenswrapper[4870]: I0130 09:22:21.253448 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvdlz"
Jan 30 09:22:21 crc kubenswrapper[4870]: I0130 09:22:21.294613 4870 scope.go:117] "RemoveContainer" containerID="2c0a1e1837f6a0f60772e976ef229a93da2a31800f3a31dcc7e73fd386908bfc"
Jan 30 09:22:21 crc kubenswrapper[4870]: I0130 09:22:21.314933 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvdlz"]
Jan 30 09:22:21 crc kubenswrapper[4870]: I0130 09:22:21.316734 4870 scope.go:117] "RemoveContainer" containerID="85d75aa3df5ba51083766d98dab3c127cb48ae1d8b08b9050b15950b6ccc4c67"
Jan 30 09:22:21 crc kubenswrapper[4870]: I0130 09:22:21.327319 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvdlz"]
Jan 30 09:22:21 crc kubenswrapper[4870]: I0130 09:22:21.372833 4870 scope.go:117] "RemoveContainer" containerID="1e69adb24152ee0cb8f50d991a0404e146f90566276c56f5dafb33d4ffb7ffa6"
Jan 30 09:22:21 crc kubenswrapper[4870]: E0130 09:22:21.373492 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e69adb24152ee0cb8f50d991a0404e146f90566276c56f5dafb33d4ffb7ffa6\": container with ID starting with 1e69adb24152ee0cb8f50d991a0404e146f90566276c56f5dafb33d4ffb7ffa6 not found: ID does not exist" containerID="1e69adb24152ee0cb8f50d991a0404e146f90566276c56f5dafb33d4ffb7ffa6"
Jan 30 09:22:21 crc kubenswrapper[4870]: I0130 09:22:21.373524 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e69adb24152ee0cb8f50d991a0404e146f90566276c56f5dafb33d4ffb7ffa6"} err="failed to get container status \"1e69adb24152ee0cb8f50d991a0404e146f90566276c56f5dafb33d4ffb7ffa6\": rpc error: code = NotFound desc = could not find container \"1e69adb24152ee0cb8f50d991a0404e146f90566276c56f5dafb33d4ffb7ffa6\": container with ID starting with 1e69adb24152ee0cb8f50d991a0404e146f90566276c56f5dafb33d4ffb7ffa6 not found: ID does not exist"
Jan 30 09:22:21 crc kubenswrapper[4870]: I0130 09:22:21.373565 4870 scope.go:117] "RemoveContainer" containerID="2c0a1e1837f6a0f60772e976ef229a93da2a31800f3a31dcc7e73fd386908bfc"
Jan 30 09:22:21 crc kubenswrapper[4870]: E0130 09:22:21.373958 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c0a1e1837f6a0f60772e976ef229a93da2a31800f3a31dcc7e73fd386908bfc\": container with ID starting with 2c0a1e1837f6a0f60772e976ef229a93da2a31800f3a31dcc7e73fd386908bfc not found: ID does not exist" containerID="2c0a1e1837f6a0f60772e976ef229a93da2a31800f3a31dcc7e73fd386908bfc"
Jan 30 09:22:21 crc kubenswrapper[4870]: I0130 09:22:21.374004 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c0a1e1837f6a0f60772e976ef229a93da2a31800f3a31dcc7e73fd386908bfc"} err="failed to get container status \"2c0a1e1837f6a0f60772e976ef229a93da2a31800f3a31dcc7e73fd386908bfc\": rpc error: code = NotFound desc = could not find container \"2c0a1e1837f6a0f60772e976ef229a93da2a31800f3a31dcc7e73fd386908bfc\": container with ID starting with 2c0a1e1837f6a0f60772e976ef229a93da2a31800f3a31dcc7e73fd386908bfc not found: ID does not exist"
Jan 30 09:22:21 crc kubenswrapper[4870]: I0130 09:22:21.374034 4870 scope.go:117] "RemoveContainer" containerID="85d75aa3df5ba51083766d98dab3c127cb48ae1d8b08b9050b15950b6ccc4c67"
failed" err="rpc error: code = NotFound desc = could not find container \"85d75aa3df5ba51083766d98dab3c127cb48ae1d8b08b9050b15950b6ccc4c67\": container with ID starting with 85d75aa3df5ba51083766d98dab3c127cb48ae1d8b08b9050b15950b6ccc4c67 not found: ID does not exist" containerID="85d75aa3df5ba51083766d98dab3c127cb48ae1d8b08b9050b15950b6ccc4c67" Jan 30 09:22:21 crc kubenswrapper[4870]: I0130 09:22:21.374394 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85d75aa3df5ba51083766d98dab3c127cb48ae1d8b08b9050b15950b6ccc4c67"} err="failed to get container status \"85d75aa3df5ba51083766d98dab3c127cb48ae1d8b08b9050b15950b6ccc4c67\": rpc error: code = NotFound desc = could not find container \"85d75aa3df5ba51083766d98dab3c127cb48ae1d8b08b9050b15950b6ccc4c67\": container with ID starting with 85d75aa3df5ba51083766d98dab3c127cb48ae1d8b08b9050b15950b6ccc4c67 not found: ID does not exist" Jan 30 09:22:22 crc kubenswrapper[4870]: I0130 09:22:22.093928 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="557c0167-03e1-4176-89dc-88cbef924f2d" path="/var/lib/kubelet/pods/557c0167-03e1-4176-89dc-88cbef924f2d/volumes" Jan 30 09:22:27 crc kubenswrapper[4870]: I0130 09:22:27.074973 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" Jan 30 09:22:27 crc kubenswrapper[4870]: E0130 09:22:27.075632 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:22:38 crc kubenswrapper[4870]: I0130 09:22:38.075012 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" Jan 30 09:22:38 crc kubenswrapper[4870]: E0130 09:22:38.075612 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:22:38 crc kubenswrapper[4870]: I0130 09:22:38.962934 4870 scope.go:117] "RemoveContainer" containerID="03a4e150a1883a14201e377adab34fa66160798fa54ff5e83c49cff343c943f0" Jan 30 09:22:39 crc kubenswrapper[4870]: I0130 09:22:39.004308 4870 scope.go:117] "RemoveContainer" containerID="9d90852ba42919283ddf463edcc40bc4b7429a7159617021ec5a64efb04dbac5" Jan 30 09:22:39 crc kubenswrapper[4870]: I0130 09:22:39.047923 4870 scope.go:117] "RemoveContainer" containerID="9ce88b36b7f8bb4c84d2e0cb990ff2dfc0b7ef3e5a70fa171000086079fff96f" Jan 30 09:22:49 crc kubenswrapper[4870]: I0130 09:22:49.075412 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" Jan 30 09:22:49 crc kubenswrapper[4870]: E0130 09:22:49.076258 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 30 09:22:49 crc kubenswrapper[4870]: E0130 09:22:49.076258 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 09:23:02 crc kubenswrapper[4870]: I0130 09:23:02.089238 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27"
Jan 30 09:23:02 crc kubenswrapper[4870]: E0130 09:23:02.090239 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 09:23:15 crc kubenswrapper[4870]: I0130 09:23:15.074528 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27"
Jan 30 09:23:15 crc kubenswrapper[4870]: E0130 09:23:15.075473 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 09:23:26 crc kubenswrapper[4870]: I0130 09:23:26.075349 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27"
Jan 30 09:23:26 crc kubenswrapper[4870]: E0130 09:23:26.076188 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 09:23:38 crc kubenswrapper[4870]: I0130 09:23:38.075539 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27"
Jan 30 09:23:38 crc kubenswrapper[4870]: E0130 09:23:38.076433 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 09:23:50 crc kubenswrapper[4870]: I0130 09:23:50.075495 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27"
podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:24:05 crc kubenswrapper[4870]: I0130 09:24:05.076041 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" Jan 30 09:24:05 crc kubenswrapper[4870]: E0130 09:24:05.076831 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:24:17 crc kubenswrapper[4870]: I0130 09:24:17.075335 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" Jan 30 09:24:17 crc kubenswrapper[4870]: E0130 09:24:17.076274 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:24:32 crc kubenswrapper[4870]: I0130 09:24:32.082102 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" Jan 30 09:24:32 crc kubenswrapper[4870]: E0130 09:24:32.083040 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:24:46 crc kubenswrapper[4870]: I0130 09:24:46.074564 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" Jan 30 09:24:46 crc kubenswrapper[4870]: E0130 09:24:46.075635 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:24:57 crc kubenswrapper[4870]: I0130 09:24:57.076236 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" Jan 30 09:24:57 crc kubenswrapper[4870]: E0130 09:24:57.077589 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:25:08 crc kubenswrapper[4870]: I0130 09:25:08.074974 4870 scope.go:117] "RemoveContainer" 
containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" Jan 30 09:25:08 crc kubenswrapper[4870]: E0130 09:25:08.075764 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:25:22 crc kubenswrapper[4870]: I0130 09:25:22.082183 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" Jan 30 09:25:22 crc kubenswrapper[4870]: E0130 09:25:22.082917 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:25:34 crc kubenswrapper[4870]: I0130 09:25:34.075085 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" Jan 30 09:25:34 crc kubenswrapper[4870]: E0130 09:25:34.075806 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:25:46 crc kubenswrapper[4870]: I0130 09:25:46.074648 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" Jan 30 09:25:46 crc kubenswrapper[4870]: E0130 09:25:46.075527 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:26:01 crc kubenswrapper[4870]: I0130 09:26:01.075146 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" Jan 30 09:26:01 crc kubenswrapper[4870]: E0130 09:26:01.075993 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:26:14 crc kubenswrapper[4870]: I0130 09:26:14.074670 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" Jan 30 09:26:14 crc kubenswrapper[4870]: E0130 09:26:14.075423 4870 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:26:29 crc kubenswrapper[4870]: I0130 09:26:29.075226 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" Jan 30 09:26:29 crc kubenswrapper[4870]: I0130 09:26:29.776942 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerStarted","Data":"795d2451707be8d2499e0a626a854aa75de48e7e4bf87e79653cebe4102927af"} Jan 30 09:26:30 crc kubenswrapper[4870]: I0130 09:26:30.903575 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qrtns"] Jan 30 09:26:30 crc kubenswrapper[4870]: E0130 09:26:30.904569 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="557c0167-03e1-4176-89dc-88cbef924f2d" containerName="extract-utilities" Jan 30 09:26:30 crc kubenswrapper[4870]: I0130 09:26:30.904585 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="557c0167-03e1-4176-89dc-88cbef924f2d" containerName="extract-utilities" Jan 30 09:26:30 crc kubenswrapper[4870]: E0130 09:26:30.904623 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="557c0167-03e1-4176-89dc-88cbef924f2d" containerName="extract-content" Jan 30 09:26:30 crc kubenswrapper[4870]: I0130 09:26:30.904631 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="557c0167-03e1-4176-89dc-88cbef924f2d" containerName="extract-content" Jan 30 09:26:30 crc kubenswrapper[4870]: E0130 09:26:30.904647 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="557c0167-03e1-4176-89dc-88cbef924f2d" containerName="registry-server" Jan 30 09:26:30 crc kubenswrapper[4870]: I0130 09:26:30.904654 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="557c0167-03e1-4176-89dc-88cbef924f2d" containerName="registry-server" Jan 30 09:26:30 crc kubenswrapper[4870]: I0130 09:26:30.904859 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="557c0167-03e1-4176-89dc-88cbef924f2d" containerName="registry-server" Jan 30 09:26:30 crc kubenswrapper[4870]: I0130 09:26:30.906319 4870 util.go:30] "No sandbox for pod can be found. 
Jan 30 09:26:30 crc kubenswrapper[4870]: I0130 09:26:30.906319 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qrtns"
Jan 30 09:26:30 crc kubenswrapper[4870]: I0130 09:26:30.917340 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qrtns"]
Jan 30 09:26:31 crc kubenswrapper[4870]: I0130 09:26:31.018808 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d55e5ff-73ca-40fa-9bbc-f032d7195b6d-utilities\") pod \"redhat-operators-qrtns\" (UID: \"4d55e5ff-73ca-40fa-9bbc-f032d7195b6d\") " pod="openshift-marketplace/redhat-operators-qrtns"
Jan 30 09:26:31 crc kubenswrapper[4870]: I0130 09:26:31.019017 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d55e5ff-73ca-40fa-9bbc-f032d7195b6d-catalog-content\") pod \"redhat-operators-qrtns\" (UID: \"4d55e5ff-73ca-40fa-9bbc-f032d7195b6d\") " pod="openshift-marketplace/redhat-operators-qrtns"
Jan 30 09:26:31 crc kubenswrapper[4870]: I0130 09:26:31.019351 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f88wm\" (UniqueName: \"kubernetes.io/projected/4d55e5ff-73ca-40fa-9bbc-f032d7195b6d-kube-api-access-f88wm\") pod \"redhat-operators-qrtns\" (UID: \"4d55e5ff-73ca-40fa-9bbc-f032d7195b6d\") " pod="openshift-marketplace/redhat-operators-qrtns"
Jan 30 09:26:31 crc kubenswrapper[4870]: I0130 09:26:31.120757 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d55e5ff-73ca-40fa-9bbc-f032d7195b6d-utilities\") pod \"redhat-operators-qrtns\" (UID: \"4d55e5ff-73ca-40fa-9bbc-f032d7195b6d\") " pod="openshift-marketplace/redhat-operators-qrtns"
Jan 30 09:26:31 crc kubenswrapper[4870]: I0130 09:26:31.120849 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d55e5ff-73ca-40fa-9bbc-f032d7195b6d-catalog-content\") pod \"redhat-operators-qrtns\" (UID: \"4d55e5ff-73ca-40fa-9bbc-f032d7195b6d\") " pod="openshift-marketplace/redhat-operators-qrtns"
Jan 30 09:26:31 crc kubenswrapper[4870]: I0130 09:26:31.120991 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f88wm\" (UniqueName: \"kubernetes.io/projected/4d55e5ff-73ca-40fa-9bbc-f032d7195b6d-kube-api-access-f88wm\") pod \"redhat-operators-qrtns\" (UID: \"4d55e5ff-73ca-40fa-9bbc-f032d7195b6d\") " pod="openshift-marketplace/redhat-operators-qrtns"
Jan 30 09:26:31 crc kubenswrapper[4870]: I0130 09:26:31.121268 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d55e5ff-73ca-40fa-9bbc-f032d7195b6d-utilities\") pod \"redhat-operators-qrtns\" (UID: \"4d55e5ff-73ca-40fa-9bbc-f032d7195b6d\") " pod="openshift-marketplace/redhat-operators-qrtns"
Jan 30 09:26:31 crc kubenswrapper[4870]: I0130 09:26:31.121507 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d55e5ff-73ca-40fa-9bbc-f032d7195b6d-catalog-content\") pod \"redhat-operators-qrtns\" (UID: \"4d55e5ff-73ca-40fa-9bbc-f032d7195b6d\") " pod="openshift-marketplace/redhat-operators-qrtns"
\"kube-api-access-f88wm\" (UniqueName: \"kubernetes.io/projected/4d55e5ff-73ca-40fa-9bbc-f032d7195b6d-kube-api-access-f88wm\") pod \"redhat-operators-qrtns\" (UID: \"4d55e5ff-73ca-40fa-9bbc-f032d7195b6d\") " pod="openshift-marketplace/redhat-operators-qrtns" Jan 30 09:26:31 crc kubenswrapper[4870]: I0130 09:26:31.225896 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qrtns" Jan 30 09:26:31 crc kubenswrapper[4870]: I0130 09:26:31.744413 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qrtns"] Jan 30 09:26:31 crc kubenswrapper[4870]: I0130 09:26:31.798380 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrtns" event={"ID":"4d55e5ff-73ca-40fa-9bbc-f032d7195b6d","Type":"ContainerStarted","Data":"0660a37d9a474051d2be6f69ed9530a611f3c39fa1b6766160ab0c3404bd0861"} Jan 30 09:26:32 crc kubenswrapper[4870]: I0130 09:26:32.809089 4870 generic.go:334] "Generic (PLEG): container finished" podID="4d55e5ff-73ca-40fa-9bbc-f032d7195b6d" containerID="08325f1a246b7fc3549417df49a465c23ef614d6d2f42b2a262b961d03a3c1b2" exitCode=0 Jan 30 09:26:32 crc kubenswrapper[4870]: I0130 09:26:32.809534 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrtns" event={"ID":"4d55e5ff-73ca-40fa-9bbc-f032d7195b6d","Type":"ContainerDied","Data":"08325f1a246b7fc3549417df49a465c23ef614d6d2f42b2a262b961d03a3c1b2"} Jan 30 09:26:32 crc kubenswrapper[4870]: I0130 09:26:32.811949 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:26:33 crc kubenswrapper[4870]: I0130 09:26:33.820140 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrtns" event={"ID":"4d55e5ff-73ca-40fa-9bbc-f032d7195b6d","Type":"ContainerStarted","Data":"25425c03452a27224c3adc16dcb78d6dbea2801599a50fce22f2fc38ab82723f"} Jan 30 09:26:37 crc kubenswrapper[4870]: I0130 09:26:37.862255 4870 generic.go:334] "Generic (PLEG): container finished" podID="4d55e5ff-73ca-40fa-9bbc-f032d7195b6d" containerID="25425c03452a27224c3adc16dcb78d6dbea2801599a50fce22f2fc38ab82723f" exitCode=0 Jan 30 09:26:37 crc kubenswrapper[4870]: I0130 09:26:37.862345 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrtns" event={"ID":"4d55e5ff-73ca-40fa-9bbc-f032d7195b6d","Type":"ContainerDied","Data":"25425c03452a27224c3adc16dcb78d6dbea2801599a50fce22f2fc38ab82723f"} Jan 30 09:26:38 crc kubenswrapper[4870]: I0130 09:26:38.874564 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrtns" event={"ID":"4d55e5ff-73ca-40fa-9bbc-f032d7195b6d","Type":"ContainerStarted","Data":"10f1aa6ed3a026db9825b6e2ad444251698959010b5269ca5d7e71c3c67a68cd"} Jan 30 09:26:38 crc kubenswrapper[4870]: I0130 09:26:38.908252 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qrtns" podStartSLOduration=3.466153904 podStartE2EDuration="8.908230892s" podCreationTimestamp="2026-01-30 09:26:30 +0000 UTC" firstStartedPulling="2026-01-30 09:26:32.811730439 +0000 UTC m=+4631.507277548" lastFinishedPulling="2026-01-30 09:26:38.253807427 +0000 UTC m=+4636.949354536" observedRunningTime="2026-01-30 09:26:38.895287019 +0000 UTC m=+4637.590834158" watchObservedRunningTime="2026-01-30 09:26:38.908230892 +0000 UTC m=+4637.603778011" Jan 30 09:26:41 crc 
Jan 30 09:26:41 crc kubenswrapper[4870]: I0130 09:26:41.226905 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qrtns"
Jan 30 09:26:41 crc kubenswrapper[4870]: I0130 09:26:41.227202 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qrtns"
Jan 30 09:26:42 crc kubenswrapper[4870]: I0130 09:26:42.293172 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qrtns" podUID="4d55e5ff-73ca-40fa-9bbc-f032d7195b6d" containerName="registry-server" probeResult="failure" output=<
Jan 30 09:26:42 crc kubenswrapper[4870]: 	timeout: failed to connect service ":50051" within 1s
Jan 30 09:26:42 crc kubenswrapper[4870]: >
Jan 30 09:26:51 crc kubenswrapper[4870]: I0130 09:26:51.275359 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qrtns"
Jan 30 09:26:51 crc kubenswrapper[4870]: I0130 09:26:51.345855 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qrtns"
Jan 30 09:26:51 crc kubenswrapper[4870]: I0130 09:26:51.530483 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qrtns"]
Jan 30 09:26:52 crc kubenswrapper[4870]: I0130 09:26:52.998467 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qrtns" podUID="4d55e5ff-73ca-40fa-9bbc-f032d7195b6d" containerName="registry-server" containerID="cri-o://10f1aa6ed3a026db9825b6e2ad444251698959010b5269ca5d7e71c3c67a68cd" gracePeriod=2
Jan 30 09:26:53 crc kubenswrapper[4870]: I0130 09:26:53.502165 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qrtns"
Jan 30 09:26:53 crc kubenswrapper[4870]: I0130 09:26:53.624780 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d55e5ff-73ca-40fa-9bbc-f032d7195b6d-utilities\") pod \"4d55e5ff-73ca-40fa-9bbc-f032d7195b6d\" (UID: \"4d55e5ff-73ca-40fa-9bbc-f032d7195b6d\") "
Jan 30 09:26:53 crc kubenswrapper[4870]: I0130 09:26:53.624943 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d55e5ff-73ca-40fa-9bbc-f032d7195b6d-catalog-content\") pod \"4d55e5ff-73ca-40fa-9bbc-f032d7195b6d\" (UID: \"4d55e5ff-73ca-40fa-9bbc-f032d7195b6d\") "
Jan 30 09:26:53 crc kubenswrapper[4870]: I0130 09:26:53.625229 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f88wm\" (UniqueName: \"kubernetes.io/projected/4d55e5ff-73ca-40fa-9bbc-f032d7195b6d-kube-api-access-f88wm\") pod \"4d55e5ff-73ca-40fa-9bbc-f032d7195b6d\" (UID: \"4d55e5ff-73ca-40fa-9bbc-f032d7195b6d\") "
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:26:53 crc kubenswrapper[4870]: I0130 09:26:53.626026 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d55e5ff-73ca-40fa-9bbc-f032d7195b6d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:26:53 crc kubenswrapper[4870]: I0130 09:26:53.630808 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d55e5ff-73ca-40fa-9bbc-f032d7195b6d-kube-api-access-f88wm" (OuterVolumeSpecName: "kube-api-access-f88wm") pod "4d55e5ff-73ca-40fa-9bbc-f032d7195b6d" (UID: "4d55e5ff-73ca-40fa-9bbc-f032d7195b6d"). InnerVolumeSpecName "kube-api-access-f88wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:26:53 crc kubenswrapper[4870]: I0130 09:26:53.727696 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f88wm\" (UniqueName: \"kubernetes.io/projected/4d55e5ff-73ca-40fa-9bbc-f032d7195b6d-kube-api-access-f88wm\") on node \"crc\" DevicePath \"\"" Jan 30 09:26:53 crc kubenswrapper[4870]: I0130 09:26:53.756437 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d55e5ff-73ca-40fa-9bbc-f032d7195b6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d55e5ff-73ca-40fa-9bbc-f032d7195b6d" (UID: "4d55e5ff-73ca-40fa-9bbc-f032d7195b6d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:26:53 crc kubenswrapper[4870]: I0130 09:26:53.830010 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d55e5ff-73ca-40fa-9bbc-f032d7195b6d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:26:54 crc kubenswrapper[4870]: I0130 09:26:54.008827 4870 generic.go:334] "Generic (PLEG): container finished" podID="4d55e5ff-73ca-40fa-9bbc-f032d7195b6d" containerID="10f1aa6ed3a026db9825b6e2ad444251698959010b5269ca5d7e71c3c67a68cd" exitCode=0 Jan 30 09:26:54 crc kubenswrapper[4870]: I0130 09:26:54.008892 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrtns" event={"ID":"4d55e5ff-73ca-40fa-9bbc-f032d7195b6d","Type":"ContainerDied","Data":"10f1aa6ed3a026db9825b6e2ad444251698959010b5269ca5d7e71c3c67a68cd"} Jan 30 09:26:54 crc kubenswrapper[4870]: I0130 09:26:54.008936 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrtns" event={"ID":"4d55e5ff-73ca-40fa-9bbc-f032d7195b6d","Type":"ContainerDied","Data":"0660a37d9a474051d2be6f69ed9530a611f3c39fa1b6766160ab0c3404bd0861"} Jan 30 09:26:54 crc kubenswrapper[4870]: I0130 09:26:54.008961 4870 scope.go:117] "RemoveContainer" containerID="10f1aa6ed3a026db9825b6e2ad444251698959010b5269ca5d7e71c3c67a68cd" Jan 30 09:26:54 crc kubenswrapper[4870]: I0130 09:26:54.008983 4870 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 09:26:54 crc kubenswrapper[4870]: I0130 09:26:54.008983 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qrtns"
Jan 30 09:26:54 crc kubenswrapper[4870]: I0130 09:26:54.039918 4870 scope.go:117] "RemoveContainer" containerID="25425c03452a27224c3adc16dcb78d6dbea2801599a50fce22f2fc38ab82723f"
Jan 30 09:26:54 crc kubenswrapper[4870]: I0130 09:26:54.057920 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qrtns"]
Jan 30 09:26:54 crc kubenswrapper[4870]: I0130 09:26:54.069073 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qrtns"]
Jan 30 09:26:54 crc kubenswrapper[4870]: I0130 09:26:54.072269 4870 scope.go:117] "RemoveContainer" containerID="08325f1a246b7fc3549417df49a465c23ef614d6d2f42b2a262b961d03a3c1b2"
Jan 30 09:26:54 crc kubenswrapper[4870]: I0130 09:26:54.089441 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d55e5ff-73ca-40fa-9bbc-f032d7195b6d" path="/var/lib/kubelet/pods/4d55e5ff-73ca-40fa-9bbc-f032d7195b6d/volumes"
Jan 30 09:26:54 crc kubenswrapper[4870]: I0130 09:26:54.114868 4870 scope.go:117] "RemoveContainer" containerID="10f1aa6ed3a026db9825b6e2ad444251698959010b5269ca5d7e71c3c67a68cd"
Jan 30 09:26:54 crc kubenswrapper[4870]: E0130 09:26:54.115463 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10f1aa6ed3a026db9825b6e2ad444251698959010b5269ca5d7e71c3c67a68cd\": container with ID starting with 10f1aa6ed3a026db9825b6e2ad444251698959010b5269ca5d7e71c3c67a68cd not found: ID does not exist" containerID="10f1aa6ed3a026db9825b6e2ad444251698959010b5269ca5d7e71c3c67a68cd"
Jan 30 09:26:54 crc kubenswrapper[4870]: I0130 09:26:54.115513 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10f1aa6ed3a026db9825b6e2ad444251698959010b5269ca5d7e71c3c67a68cd"} err="failed to get container status \"10f1aa6ed3a026db9825b6e2ad444251698959010b5269ca5d7e71c3c67a68cd\": rpc error: code = NotFound desc = could not find container \"10f1aa6ed3a026db9825b6e2ad444251698959010b5269ca5d7e71c3c67a68cd\": container with ID starting with 10f1aa6ed3a026db9825b6e2ad444251698959010b5269ca5d7e71c3c67a68cd not found: ID does not exist"
Jan 30 09:26:54 crc kubenswrapper[4870]: I0130 09:26:54.115544 4870 scope.go:117] "RemoveContainer" containerID="25425c03452a27224c3adc16dcb78d6dbea2801599a50fce22f2fc38ab82723f"
Jan 30 09:26:54 crc kubenswrapper[4870]: E0130 09:26:54.116016 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25425c03452a27224c3adc16dcb78d6dbea2801599a50fce22f2fc38ab82723f\": container with ID starting with 25425c03452a27224c3adc16dcb78d6dbea2801599a50fce22f2fc38ab82723f not found: ID does not exist" containerID="25425c03452a27224c3adc16dcb78d6dbea2801599a50fce22f2fc38ab82723f"
Jan 30 09:26:54 crc kubenswrapper[4870]: I0130 09:26:54.116070 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25425c03452a27224c3adc16dcb78d6dbea2801599a50fce22f2fc38ab82723f"} err="failed to get container status \"25425c03452a27224c3adc16dcb78d6dbea2801599a50fce22f2fc38ab82723f\": rpc error: code = NotFound desc = could not find container \"25425c03452a27224c3adc16dcb78d6dbea2801599a50fce22f2fc38ab82723f\": container with ID starting with 25425c03452a27224c3adc16dcb78d6dbea2801599a50fce22f2fc38ab82723f not found: ID does not exist"
Jan 30 09:26:54 crc kubenswrapper[4870]: I0130 09:26:54.116088 4870 scope.go:117] "RemoveContainer" containerID="08325f1a246b7fc3549417df49a465c23ef614d6d2f42b2a262b961d03a3c1b2"
Jan 30 09:26:54 crc kubenswrapper[4870]: E0130 09:26:54.116501 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08325f1a246b7fc3549417df49a465c23ef614d6d2f42b2a262b961d03a3c1b2\": container with ID starting with 08325f1a246b7fc3549417df49a465c23ef614d6d2f42b2a262b961d03a3c1b2 not found: ID does not exist" containerID="08325f1a246b7fc3549417df49a465c23ef614d6d2f42b2a262b961d03a3c1b2"
Jan 30 09:26:54 crc kubenswrapper[4870]: I0130 09:26:54.116529 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08325f1a246b7fc3549417df49a465c23ef614d6d2f42b2a262b961d03a3c1b2"} err="failed to get container status \"08325f1a246b7fc3549417df49a465c23ef614d6d2f42b2a262b961d03a3c1b2\": rpc error: code = NotFound desc = could not find container \"08325f1a246b7fc3549417df49a465c23ef614d6d2f42b2a262b961d03a3c1b2\": container with ID starting with 08325f1a246b7fc3549417df49a465c23ef614d6d2f42b2a262b961d03a3c1b2 not found: ID does not exist"
Jan 30 09:27:05 crc kubenswrapper[4870]: I0130 09:27:05.194377 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8l4jx"]
Jan 30 09:27:05 crc kubenswrapper[4870]: E0130 09:27:05.195492 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d55e5ff-73ca-40fa-9bbc-f032d7195b6d" containerName="extract-content"
Jan 30 09:27:05 crc kubenswrapper[4870]: I0130 09:27:05.195511 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d55e5ff-73ca-40fa-9bbc-f032d7195b6d" containerName="extract-content"
Jan 30 09:27:05 crc kubenswrapper[4870]: E0130 09:27:05.195529 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d55e5ff-73ca-40fa-9bbc-f032d7195b6d" containerName="extract-utilities"
Jan 30 09:27:05 crc kubenswrapper[4870]: I0130 09:27:05.195536 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d55e5ff-73ca-40fa-9bbc-f032d7195b6d" containerName="extract-utilities"
Jan 30 09:27:05 crc kubenswrapper[4870]: E0130 09:27:05.195546 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d55e5ff-73ca-40fa-9bbc-f032d7195b6d" containerName="registry-server"
Jan 30 09:27:05 crc kubenswrapper[4870]: I0130 09:27:05.195553 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d55e5ff-73ca-40fa-9bbc-f032d7195b6d" containerName="registry-server"
Jan 30 09:27:05 crc kubenswrapper[4870]: I0130 09:27:05.195829 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d55e5ff-73ca-40fa-9bbc-f032d7195b6d" containerName="registry-server"
Jan 30 09:27:05 crc kubenswrapper[4870]: I0130 09:27:05.197649 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8l4jx"
Jan 30 09:27:05 crc kubenswrapper[4870]: I0130 09:27:05.228019 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8l4jx"]
Jan 30 09:27:05 crc kubenswrapper[4870]: I0130 09:27:05.400563 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/247e00dc-e547-4b0c-802d-7e7ef8dd8b58-catalog-content\") pod \"certified-operators-8l4jx\" (UID: \"247e00dc-e547-4b0c-802d-7e7ef8dd8b58\") " pod="openshift-marketplace/certified-operators-8l4jx"
Jan 30 09:27:05 crc kubenswrapper[4870]: I0130 09:27:05.400641 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/247e00dc-e547-4b0c-802d-7e7ef8dd8b58-utilities\") pod \"certified-operators-8l4jx\" (UID: \"247e00dc-e547-4b0c-802d-7e7ef8dd8b58\") " pod="openshift-marketplace/certified-operators-8l4jx"
Jan 30 09:27:05 crc kubenswrapper[4870]: I0130 09:27:05.400671 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9gkq\" (UniqueName: \"kubernetes.io/projected/247e00dc-e547-4b0c-802d-7e7ef8dd8b58-kube-api-access-b9gkq\") pod \"certified-operators-8l4jx\" (UID: \"247e00dc-e547-4b0c-802d-7e7ef8dd8b58\") " pod="openshift-marketplace/certified-operators-8l4jx"
Jan 30 09:27:05 crc kubenswrapper[4870]: I0130 09:27:05.502756 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/247e00dc-e547-4b0c-802d-7e7ef8dd8b58-catalog-content\") pod \"certified-operators-8l4jx\" (UID: \"247e00dc-e547-4b0c-802d-7e7ef8dd8b58\") " pod="openshift-marketplace/certified-operators-8l4jx"
Jan 30 09:27:05 crc kubenswrapper[4870]: I0130 09:27:05.502889 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/247e00dc-e547-4b0c-802d-7e7ef8dd8b58-utilities\") pod \"certified-operators-8l4jx\" (UID: \"247e00dc-e547-4b0c-802d-7e7ef8dd8b58\") " pod="openshift-marketplace/certified-operators-8l4jx"
Jan 30 09:27:05 crc kubenswrapper[4870]: I0130 09:27:05.503086 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9gkq\" (UniqueName: \"kubernetes.io/projected/247e00dc-e547-4b0c-802d-7e7ef8dd8b58-kube-api-access-b9gkq\") pod \"certified-operators-8l4jx\" (UID: \"247e00dc-e547-4b0c-802d-7e7ef8dd8b58\") " pod="openshift-marketplace/certified-operators-8l4jx"
Jan 30 09:27:05 crc kubenswrapper[4870]: I0130 09:27:05.503556 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/247e00dc-e547-4b0c-802d-7e7ef8dd8b58-utilities\") pod \"certified-operators-8l4jx\" (UID: \"247e00dc-e547-4b0c-802d-7e7ef8dd8b58\") " pod="openshift-marketplace/certified-operators-8l4jx"
Jan 30 09:27:05 crc kubenswrapper[4870]: I0130 09:27:05.503637 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/247e00dc-e547-4b0c-802d-7e7ef8dd8b58-catalog-content\") pod \"certified-operators-8l4jx\" (UID: \"247e00dc-e547-4b0c-802d-7e7ef8dd8b58\") " pod="openshift-marketplace/certified-operators-8l4jx"
"MountVolume.SetUp succeeded for volume \"kube-api-access-b9gkq\" (UniqueName: \"kubernetes.io/projected/247e00dc-e547-4b0c-802d-7e7ef8dd8b58-kube-api-access-b9gkq\") pod \"certified-operators-8l4jx\" (UID: \"247e00dc-e547-4b0c-802d-7e7ef8dd8b58\") " pod="openshift-marketplace/certified-operators-8l4jx" Jan 30 09:27:05 crc kubenswrapper[4870]: I0130 09:27:05.822503 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8l4jx" Jan 30 09:27:06 crc kubenswrapper[4870]: I0130 09:27:06.353695 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8l4jx"] Jan 30 09:27:07 crc kubenswrapper[4870]: I0130 09:27:07.131037 4870 generic.go:334] "Generic (PLEG): container finished" podID="247e00dc-e547-4b0c-802d-7e7ef8dd8b58" containerID="983be6322e2103d41550f8f7f6e0e561cbc3dbe9269392d7c5c2526c4f9b5d63" exitCode=0 Jan 30 09:27:07 crc kubenswrapper[4870]: I0130 09:27:07.131120 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8l4jx" event={"ID":"247e00dc-e547-4b0c-802d-7e7ef8dd8b58","Type":"ContainerDied","Data":"983be6322e2103d41550f8f7f6e0e561cbc3dbe9269392d7c5c2526c4f9b5d63"} Jan 30 09:27:07 crc kubenswrapper[4870]: I0130 09:27:07.131356 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8l4jx" event={"ID":"247e00dc-e547-4b0c-802d-7e7ef8dd8b58","Type":"ContainerStarted","Data":"d79530e1301adb8383a69357cf81b26a5838467cb60dbf7859d6ecbdfa5dea14"} Jan 30 09:27:08 crc kubenswrapper[4870]: I0130 09:27:08.142451 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8l4jx" event={"ID":"247e00dc-e547-4b0c-802d-7e7ef8dd8b58","Type":"ContainerStarted","Data":"644b76bd8bb66c6ebaa0efcd433e78ebb9485645651bbb2b9c8e69d9cca2a794"} Jan 30 09:27:10 crc kubenswrapper[4870]: I0130 09:27:10.160699 4870 generic.go:334] "Generic (PLEG): container finished" podID="247e00dc-e547-4b0c-802d-7e7ef8dd8b58" containerID="644b76bd8bb66c6ebaa0efcd433e78ebb9485645651bbb2b9c8e69d9cca2a794" exitCode=0 Jan 30 09:27:10 crc kubenswrapper[4870]: I0130 09:27:10.161043 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8l4jx" event={"ID":"247e00dc-e547-4b0c-802d-7e7ef8dd8b58","Type":"ContainerDied","Data":"644b76bd8bb66c6ebaa0efcd433e78ebb9485645651bbb2b9c8e69d9cca2a794"} Jan 30 09:27:11 crc kubenswrapper[4870]: I0130 09:27:11.171424 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8l4jx" event={"ID":"247e00dc-e547-4b0c-802d-7e7ef8dd8b58","Type":"ContainerStarted","Data":"59ad2bf5c789adc8979794468c11cf52aedf813fac54e74a61ef08e9e9826d50"} Jan 30 09:27:11 crc kubenswrapper[4870]: I0130 09:27:11.193452 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8l4jx" podStartSLOduration=2.7156367340000003 podStartE2EDuration="6.193435496s" podCreationTimestamp="2026-01-30 09:27:05 +0000 UTC" firstStartedPulling="2026-01-30 09:27:07.133190114 +0000 UTC m=+4665.828737223" lastFinishedPulling="2026-01-30 09:27:10.610988876 +0000 UTC m=+4669.306535985" observedRunningTime="2026-01-30 09:27:11.190952609 +0000 UTC m=+4669.886499728" watchObservedRunningTime="2026-01-30 09:27:11.193435496 +0000 UTC m=+4669.888982605" Jan 30 09:27:15 crc kubenswrapper[4870]: I0130 09:27:15.823632 4870 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8l4jx" Jan 30 09:27:15 crc kubenswrapper[4870]: I0130 09:27:15.824229 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8l4jx" Jan 30 09:27:15 crc kubenswrapper[4870]: I0130 09:27:15.877697 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8l4jx" Jan 30 09:27:16 crc kubenswrapper[4870]: I0130 09:27:16.265477 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8l4jx" Jan 30 09:27:16 crc kubenswrapper[4870]: I0130 09:27:16.317665 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8l4jx"] Jan 30 09:27:18 crc kubenswrapper[4870]: I0130 09:27:18.240798 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8l4jx" podUID="247e00dc-e547-4b0c-802d-7e7ef8dd8b58" containerName="registry-server" containerID="cri-o://59ad2bf5c789adc8979794468c11cf52aedf813fac54e74a61ef08e9e9826d50" gracePeriod=2 Jan 30 09:27:19 crc kubenswrapper[4870]: I0130 09:27:19.251670 4870 generic.go:334] "Generic (PLEG): container finished" podID="247e00dc-e547-4b0c-802d-7e7ef8dd8b58" containerID="59ad2bf5c789adc8979794468c11cf52aedf813fac54e74a61ef08e9e9826d50" exitCode=0 Jan 30 09:27:19 crc kubenswrapper[4870]: I0130 09:27:19.251740 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8l4jx" event={"ID":"247e00dc-e547-4b0c-802d-7e7ef8dd8b58","Type":"ContainerDied","Data":"59ad2bf5c789adc8979794468c11cf52aedf813fac54e74a61ef08e9e9826d50"} Jan 30 09:27:19 crc kubenswrapper[4870]: I0130 09:27:19.604347 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8l4jx" Jan 30 09:27:19 crc kubenswrapper[4870]: I0130 09:27:19.632462 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/247e00dc-e547-4b0c-802d-7e7ef8dd8b58-catalog-content\") pod \"247e00dc-e547-4b0c-802d-7e7ef8dd8b58\" (UID: \"247e00dc-e547-4b0c-802d-7e7ef8dd8b58\") " Jan 30 09:27:19 crc kubenswrapper[4870]: I0130 09:27:19.632624 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/247e00dc-e547-4b0c-802d-7e7ef8dd8b58-utilities\") pod \"247e00dc-e547-4b0c-802d-7e7ef8dd8b58\" (UID: \"247e00dc-e547-4b0c-802d-7e7ef8dd8b58\") " Jan 30 09:27:19 crc kubenswrapper[4870]: I0130 09:27:19.632694 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9gkq\" (UniqueName: \"kubernetes.io/projected/247e00dc-e547-4b0c-802d-7e7ef8dd8b58-kube-api-access-b9gkq\") pod \"247e00dc-e547-4b0c-802d-7e7ef8dd8b58\" (UID: \"247e00dc-e547-4b0c-802d-7e7ef8dd8b58\") " Jan 30 09:27:19 crc kubenswrapper[4870]: I0130 09:27:19.633689 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/247e00dc-e547-4b0c-802d-7e7ef8dd8b58-utilities" (OuterVolumeSpecName: "utilities") pod "247e00dc-e547-4b0c-802d-7e7ef8dd8b58" (UID: "247e00dc-e547-4b0c-802d-7e7ef8dd8b58"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:27:19 crc kubenswrapper[4870]: I0130 09:27:19.647293 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/247e00dc-e547-4b0c-802d-7e7ef8dd8b58-kube-api-access-b9gkq" (OuterVolumeSpecName: "kube-api-access-b9gkq") pod "247e00dc-e547-4b0c-802d-7e7ef8dd8b58" (UID: "247e00dc-e547-4b0c-802d-7e7ef8dd8b58"). InnerVolumeSpecName "kube-api-access-b9gkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:27:19 crc kubenswrapper[4870]: I0130 09:27:19.699920 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/247e00dc-e547-4b0c-802d-7e7ef8dd8b58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "247e00dc-e547-4b0c-802d-7e7ef8dd8b58" (UID: "247e00dc-e547-4b0c-802d-7e7ef8dd8b58"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:27:19 crc kubenswrapper[4870]: I0130 09:27:19.734846 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/247e00dc-e547-4b0c-802d-7e7ef8dd8b58-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:27:19 crc kubenswrapper[4870]: I0130 09:27:19.734894 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9gkq\" (UniqueName: \"kubernetes.io/projected/247e00dc-e547-4b0c-802d-7e7ef8dd8b58-kube-api-access-b9gkq\") on node \"crc\" DevicePath \"\"" Jan 30 09:27:19 crc kubenswrapper[4870]: I0130 09:27:19.734906 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/247e00dc-e547-4b0c-802d-7e7ef8dd8b58-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:27:20 crc kubenswrapper[4870]: I0130 09:27:20.263950 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8l4jx" event={"ID":"247e00dc-e547-4b0c-802d-7e7ef8dd8b58","Type":"ContainerDied","Data":"d79530e1301adb8383a69357cf81b26a5838467cb60dbf7859d6ecbdfa5dea14"} Jan 30 09:27:20 crc kubenswrapper[4870]: I0130 09:27:20.264031 4870 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 09:27:20 crc kubenswrapper[4870]: I0130 09:27:20.264031 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8l4jx"
Jan 30 09:27:20 crc kubenswrapper[4870]: I0130 09:27:20.264339 4870 scope.go:117] "RemoveContainer" containerID="59ad2bf5c789adc8979794468c11cf52aedf813fac54e74a61ef08e9e9826d50"
Jan 30 09:27:20 crc kubenswrapper[4870]: I0130 09:27:20.293254 4870 scope.go:117] "RemoveContainer" containerID="644b76bd8bb66c6ebaa0efcd433e78ebb9485645651bbb2b9c8e69d9cca2a794"
Jan 30 09:27:20 crc kubenswrapper[4870]: I0130 09:27:20.294160 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8l4jx"]
Jan 30 09:27:20 crc kubenswrapper[4870]: I0130 09:27:20.321940 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8l4jx"]
Jan 30 09:27:20 crc kubenswrapper[4870]: I0130 09:27:20.323678 4870 scope.go:117] "RemoveContainer" containerID="983be6322e2103d41550f8f7f6e0e561cbc3dbe9269392d7c5c2526c4f9b5d63"
Jan 30 09:27:22 crc kubenswrapper[4870]: I0130 09:27:22.087774 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="247e00dc-e547-4b0c-802d-7e7ef8dd8b58" path="/var/lib/kubelet/pods/247e00dc-e547-4b0c-802d-7e7ef8dd8b58/volumes"
Jan 30 09:28:02 crc kubenswrapper[4870]: I0130 09:28:02.419301 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-btswt"]
Jan 30 09:28:02 crc kubenswrapper[4870]: E0130 09:28:02.420323 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="247e00dc-e547-4b0c-802d-7e7ef8dd8b58" containerName="registry-server"
Jan 30 09:28:02 crc kubenswrapper[4870]: I0130 09:28:02.420340 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="247e00dc-e547-4b0c-802d-7e7ef8dd8b58" containerName="registry-server"
Jan 30 09:28:02 crc kubenswrapper[4870]: E0130 09:28:02.420355 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="247e00dc-e547-4b0c-802d-7e7ef8dd8b58" containerName="extract-content"
Jan 30 09:28:02 crc kubenswrapper[4870]: I0130 09:28:02.420363 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="247e00dc-e547-4b0c-802d-7e7ef8dd8b58" containerName="extract-content"
Jan 30 09:28:02 crc kubenswrapper[4870]: E0130 09:28:02.420389 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="247e00dc-e547-4b0c-802d-7e7ef8dd8b58" containerName="extract-utilities"
Jan 30 09:28:02 crc kubenswrapper[4870]: I0130 09:28:02.420399 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="247e00dc-e547-4b0c-802d-7e7ef8dd8b58" containerName="extract-utilities"
Jan 30 09:28:02 crc kubenswrapper[4870]: I0130 09:28:02.420656 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="247e00dc-e547-4b0c-802d-7e7ef8dd8b58" containerName="registry-server"
Need to start a new one" pod="openshift-marketplace/community-operators-btswt" Jan 30 09:28:02 crc kubenswrapper[4870]: I0130 09:28:02.440133 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfpfj\" (UniqueName: \"kubernetes.io/projected/0cda615d-6f79-49fc-812e-28f590544089-kube-api-access-tfpfj\") pod \"community-operators-btswt\" (UID: \"0cda615d-6f79-49fc-812e-28f590544089\") " pod="openshift-marketplace/community-operators-btswt" Jan 30 09:28:02 crc kubenswrapper[4870]: I0130 09:28:02.440302 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cda615d-6f79-49fc-812e-28f590544089-catalog-content\") pod \"community-operators-btswt\" (UID: \"0cda615d-6f79-49fc-812e-28f590544089\") " pod="openshift-marketplace/community-operators-btswt" Jan 30 09:28:02 crc kubenswrapper[4870]: I0130 09:28:02.440470 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cda615d-6f79-49fc-812e-28f590544089-utilities\") pod \"community-operators-btswt\" (UID: \"0cda615d-6f79-49fc-812e-28f590544089\") " pod="openshift-marketplace/community-operators-btswt" Jan 30 09:28:02 crc kubenswrapper[4870]: I0130 09:28:02.443923 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-btswt"] Jan 30 09:28:02 crc kubenswrapper[4870]: I0130 09:28:02.543001 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cda615d-6f79-49fc-812e-28f590544089-utilities\") pod \"community-operators-btswt\" (UID: \"0cda615d-6f79-49fc-812e-28f590544089\") " pod="openshift-marketplace/community-operators-btswt" Jan 30 09:28:02 crc kubenswrapper[4870]: I0130 09:28:02.543117 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfpfj\" (UniqueName: \"kubernetes.io/projected/0cda615d-6f79-49fc-812e-28f590544089-kube-api-access-tfpfj\") pod \"community-operators-btswt\" (UID: \"0cda615d-6f79-49fc-812e-28f590544089\") " pod="openshift-marketplace/community-operators-btswt" Jan 30 09:28:02 crc kubenswrapper[4870]: I0130 09:28:02.543282 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cda615d-6f79-49fc-812e-28f590544089-catalog-content\") pod \"community-operators-btswt\" (UID: \"0cda615d-6f79-49fc-812e-28f590544089\") " pod="openshift-marketplace/community-operators-btswt" Jan 30 09:28:02 crc kubenswrapper[4870]: I0130 09:28:02.543686 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cda615d-6f79-49fc-812e-28f590544089-utilities\") pod \"community-operators-btswt\" (UID: \"0cda615d-6f79-49fc-812e-28f590544089\") " pod="openshift-marketplace/community-operators-btswt" Jan 30 09:28:02 crc kubenswrapper[4870]: I0130 09:28:02.543808 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cda615d-6f79-49fc-812e-28f590544089-catalog-content\") pod \"community-operators-btswt\" (UID: \"0cda615d-6f79-49fc-812e-28f590544089\") " pod="openshift-marketplace/community-operators-btswt" Jan 30 09:28:02 crc kubenswrapper[4870]: I0130 09:28:02.574814 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tfpfj\" (UniqueName: \"kubernetes.io/projected/0cda615d-6f79-49fc-812e-28f590544089-kube-api-access-tfpfj\") pod \"community-operators-btswt\" (UID: \"0cda615d-6f79-49fc-812e-28f590544089\") " pod="openshift-marketplace/community-operators-btswt" Jan 30 09:28:02 crc kubenswrapper[4870]: I0130 09:28:02.769096 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-btswt" Jan 30 09:28:03 crc kubenswrapper[4870]: I0130 09:28:03.286338 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-btswt"] Jan 30 09:28:03 crc kubenswrapper[4870]: I0130 09:28:03.680170 4870 generic.go:334] "Generic (PLEG): container finished" podID="0cda615d-6f79-49fc-812e-28f590544089" containerID="e5e3604dc8b68af28314816ffb734f27577b48850481d2510bb0a559fe9a53ef" exitCode=0 Jan 30 09:28:03 crc kubenswrapper[4870]: I0130 09:28:03.680209 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btswt" event={"ID":"0cda615d-6f79-49fc-812e-28f590544089","Type":"ContainerDied","Data":"e5e3604dc8b68af28314816ffb734f27577b48850481d2510bb0a559fe9a53ef"} Jan 30 09:28:03 crc kubenswrapper[4870]: I0130 09:28:03.680235 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btswt" event={"ID":"0cda615d-6f79-49fc-812e-28f590544089","Type":"ContainerStarted","Data":"d1f2f6a834009ccba40130c3535b999d1113adab0a7f3036f55f87b8dcfc3789"} Jan 30 09:28:04 crc kubenswrapper[4870]: I0130 09:28:04.694048 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btswt" event={"ID":"0cda615d-6f79-49fc-812e-28f590544089","Type":"ContainerStarted","Data":"7135d46b8f44718d9556e5e7f714ca0ee6bd5405089d670de1b83da549dbcc6d"} Jan 30 09:28:05 crc kubenswrapper[4870]: I0130 09:28:05.705994 4870 generic.go:334] "Generic (PLEG): container finished" podID="0cda615d-6f79-49fc-812e-28f590544089" containerID="7135d46b8f44718d9556e5e7f714ca0ee6bd5405089d670de1b83da549dbcc6d" exitCode=0 Jan 30 09:28:05 crc kubenswrapper[4870]: I0130 09:28:05.706044 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btswt" event={"ID":"0cda615d-6f79-49fc-812e-28f590544089","Type":"ContainerDied","Data":"7135d46b8f44718d9556e5e7f714ca0ee6bd5405089d670de1b83da549dbcc6d"} Jan 30 09:28:06 crc kubenswrapper[4870]: I0130 09:28:06.717459 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btswt" event={"ID":"0cda615d-6f79-49fc-812e-28f590544089","Type":"ContainerStarted","Data":"5d6616f5ad39822e7b0feb0d3252a9aefe485e126286e84b2af8bcf7d03012d4"} Jan 30 09:28:06 crc kubenswrapper[4870]: I0130 09:28:06.742419 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-btswt" podStartSLOduration=2.315169855 podStartE2EDuration="4.742401852s" podCreationTimestamp="2026-01-30 09:28:02 +0000 UTC" firstStartedPulling="2026-01-30 09:28:03.681724454 +0000 UTC m=+4722.377271563" lastFinishedPulling="2026-01-30 09:28:06.108956451 +0000 UTC m=+4724.804503560" observedRunningTime="2026-01-30 09:28:06.734757723 +0000 UTC m=+4725.430304842" watchObservedRunningTime="2026-01-30 09:28:06.742401852 +0000 UTC m=+4725.437948961" Jan 30 09:28:12 crc kubenswrapper[4870]: I0130 09:28:12.770139 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-btswt" Jan 30 09:28:12 crc kubenswrapper[4870]: I0130 09:28:12.770951 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-btswt" Jan 30 09:28:13 crc kubenswrapper[4870]: I0130 09:28:13.059080 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-btswt" Jan 30 09:28:13 crc kubenswrapper[4870]: I0130 09:28:13.842196 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-btswt" Jan 30 09:28:13 crc kubenswrapper[4870]: I0130 09:28:13.900535 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-btswt"] Jan 30 09:28:15 crc kubenswrapper[4870]: I0130 09:28:15.808752 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-btswt" podUID="0cda615d-6f79-49fc-812e-28f590544089" containerName="registry-server" containerID="cri-o://5d6616f5ad39822e7b0feb0d3252a9aefe485e126286e84b2af8bcf7d03012d4" gracePeriod=2 Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.311126 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-btswt" Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.425919 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cda615d-6f79-49fc-812e-28f590544089-catalog-content\") pod \"0cda615d-6f79-49fc-812e-28f590544089\" (UID: \"0cda615d-6f79-49fc-812e-28f590544089\") " Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.426370 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfpfj\" (UniqueName: \"kubernetes.io/projected/0cda615d-6f79-49fc-812e-28f590544089-kube-api-access-tfpfj\") pod \"0cda615d-6f79-49fc-812e-28f590544089\" (UID: \"0cda615d-6f79-49fc-812e-28f590544089\") " Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.426495 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cda615d-6f79-49fc-812e-28f590544089-utilities\") pod \"0cda615d-6f79-49fc-812e-28f590544089\" (UID: \"0cda615d-6f79-49fc-812e-28f590544089\") " Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.427289 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cda615d-6f79-49fc-812e-28f590544089-utilities" (OuterVolumeSpecName: "utilities") pod "0cda615d-6f79-49fc-812e-28f590544089" (UID: "0cda615d-6f79-49fc-812e-28f590544089"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.436185 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cda615d-6f79-49fc-812e-28f590544089-kube-api-access-tfpfj" (OuterVolumeSpecName: "kube-api-access-tfpfj") pod "0cda615d-6f79-49fc-812e-28f590544089" (UID: "0cda615d-6f79-49fc-812e-28f590544089"). InnerVolumeSpecName "kube-api-access-tfpfj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.485762 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cda615d-6f79-49fc-812e-28f590544089-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cda615d-6f79-49fc-812e-28f590544089" (UID: "0cda615d-6f79-49fc-812e-28f590544089"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.529286 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cda615d-6f79-49fc-812e-28f590544089-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.529321 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfpfj\" (UniqueName: \"kubernetes.io/projected/0cda615d-6f79-49fc-812e-28f590544089-kube-api-access-tfpfj\") on node \"crc\" DevicePath \"\"" Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.529332 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cda615d-6f79-49fc-812e-28f590544089-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.823509 4870 generic.go:334] "Generic (PLEG): container finished" podID="0cda615d-6f79-49fc-812e-28f590544089" containerID="5d6616f5ad39822e7b0feb0d3252a9aefe485e126286e84b2af8bcf7d03012d4" exitCode=0 Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.823603 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btswt" event={"ID":"0cda615d-6f79-49fc-812e-28f590544089","Type":"ContainerDied","Data":"5d6616f5ad39822e7b0feb0d3252a9aefe485e126286e84b2af8bcf7d03012d4"} Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.823676 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btswt" event={"ID":"0cda615d-6f79-49fc-812e-28f590544089","Type":"ContainerDied","Data":"d1f2f6a834009ccba40130c3535b999d1113adab0a7f3036f55f87b8dcfc3789"} Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.823711 4870 scope.go:117] "RemoveContainer" containerID="5d6616f5ad39822e7b0feb0d3252a9aefe485e126286e84b2af8bcf7d03012d4" Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.823729 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-btswt" Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.874141 4870 scope.go:117] "RemoveContainer" containerID="7135d46b8f44718d9556e5e7f714ca0ee6bd5405089d670de1b83da549dbcc6d" Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.878381 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-btswt"] Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.889275 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-btswt"] Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.903690 4870 scope.go:117] "RemoveContainer" containerID="e5e3604dc8b68af28314816ffb734f27577b48850481d2510bb0a559fe9a53ef" Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.966318 4870 scope.go:117] "RemoveContainer" containerID="5d6616f5ad39822e7b0feb0d3252a9aefe485e126286e84b2af8bcf7d03012d4" Jan 30 09:28:16 crc kubenswrapper[4870]: E0130 09:28:16.966845 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d6616f5ad39822e7b0feb0d3252a9aefe485e126286e84b2af8bcf7d03012d4\": container with ID starting with 5d6616f5ad39822e7b0feb0d3252a9aefe485e126286e84b2af8bcf7d03012d4 not found: ID does not exist" containerID="5d6616f5ad39822e7b0feb0d3252a9aefe485e126286e84b2af8bcf7d03012d4" Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.966975 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d6616f5ad39822e7b0feb0d3252a9aefe485e126286e84b2af8bcf7d03012d4"} err="failed to get container status \"5d6616f5ad39822e7b0feb0d3252a9aefe485e126286e84b2af8bcf7d03012d4\": rpc error: code = NotFound desc = could not find container \"5d6616f5ad39822e7b0feb0d3252a9aefe485e126286e84b2af8bcf7d03012d4\": container with ID starting with 5d6616f5ad39822e7b0feb0d3252a9aefe485e126286e84b2af8bcf7d03012d4 not found: ID does not exist" Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.967011 4870 scope.go:117] "RemoveContainer" containerID="7135d46b8f44718d9556e5e7f714ca0ee6bd5405089d670de1b83da549dbcc6d" Jan 30 09:28:16 crc kubenswrapper[4870]: E0130 09:28:16.967534 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7135d46b8f44718d9556e5e7f714ca0ee6bd5405089d670de1b83da549dbcc6d\": container with ID starting with 7135d46b8f44718d9556e5e7f714ca0ee6bd5405089d670de1b83da549dbcc6d not found: ID does not exist" containerID="7135d46b8f44718d9556e5e7f714ca0ee6bd5405089d670de1b83da549dbcc6d" Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.967576 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7135d46b8f44718d9556e5e7f714ca0ee6bd5405089d670de1b83da549dbcc6d"} err="failed to get container status \"7135d46b8f44718d9556e5e7f714ca0ee6bd5405089d670de1b83da549dbcc6d\": rpc error: code = NotFound desc = could not find container \"7135d46b8f44718d9556e5e7f714ca0ee6bd5405089d670de1b83da549dbcc6d\": container with ID starting with 7135d46b8f44718d9556e5e7f714ca0ee6bd5405089d670de1b83da549dbcc6d not found: ID does not exist" Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.967601 4870 scope.go:117] "RemoveContainer" containerID="e5e3604dc8b68af28314816ffb734f27577b48850481d2510bb0a559fe9a53ef" Jan 30 09:28:16 crc kubenswrapper[4870]: E0130 09:28:16.968163 4870 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e5e3604dc8b68af28314816ffb734f27577b48850481d2510bb0a559fe9a53ef\": container with ID starting with e5e3604dc8b68af28314816ffb734f27577b48850481d2510bb0a559fe9a53ef not found: ID does not exist" containerID="e5e3604dc8b68af28314816ffb734f27577b48850481d2510bb0a559fe9a53ef" Jan 30 09:28:16 crc kubenswrapper[4870]: I0130 09:28:16.968210 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5e3604dc8b68af28314816ffb734f27577b48850481d2510bb0a559fe9a53ef"} err="failed to get container status \"e5e3604dc8b68af28314816ffb734f27577b48850481d2510bb0a559fe9a53ef\": rpc error: code = NotFound desc = could not find container \"e5e3604dc8b68af28314816ffb734f27577b48850481d2510bb0a559fe9a53ef\": container with ID starting with e5e3604dc8b68af28314816ffb734f27577b48850481d2510bb0a559fe9a53ef not found: ID does not exist" Jan 30 09:28:18 crc kubenswrapper[4870]: I0130 09:28:18.087535 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cda615d-6f79-49fc-812e-28f590544089" path="/var/lib/kubelet/pods/0cda615d-6f79-49fc-812e-28f590544089/volumes" Jan 30 09:28:55 crc kubenswrapper[4870]: I0130 09:28:55.249105 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:28:55 crc kubenswrapper[4870]: I0130 09:28:55.249733 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:29:25 crc kubenswrapper[4870]: I0130 09:29:25.249283 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:29:25 crc kubenswrapper[4870]: I0130 09:29:25.249926 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:29:55 crc kubenswrapper[4870]: I0130 09:29:55.251154 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:29:55 crc kubenswrapper[4870]: I0130 09:29:55.251934 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:29:55 crc kubenswrapper[4870]: I0130 09:29:55.252014 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 09:29:55 crc kubenswrapper[4870]: I0130 09:29:55.253215 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"795d2451707be8d2499e0a626a854aa75de48e7e4bf87e79653cebe4102927af"} pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:29:55 crc kubenswrapper[4870]: I0130 09:29:55.253318 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" containerID="cri-o://795d2451707be8d2499e0a626a854aa75de48e7e4bf87e79653cebe4102927af" gracePeriod=600 Jan 30 09:29:55 crc kubenswrapper[4870]: I0130 09:29:55.795621 4870 generic.go:334] "Generic (PLEG): container finished" podID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerID="795d2451707be8d2499e0a626a854aa75de48e7e4bf87e79653cebe4102927af" exitCode=0 Jan 30 09:29:55 crc kubenswrapper[4870]: I0130 09:29:55.795700 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerDied","Data":"795d2451707be8d2499e0a626a854aa75de48e7e4bf87e79653cebe4102927af"} Jan 30 09:29:55 crc kubenswrapper[4870]: I0130 09:29:55.796134 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerStarted","Data":"7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40"} Jan 30 09:29:55 crc kubenswrapper[4870]: I0130 09:29:55.796161 4870 scope.go:117] "RemoveContainer" containerID="a0995ec52533f007b3aef607a4954202a0ae3e897240f345b324b987bbd17f27" Jan 30 09:30:00 crc kubenswrapper[4870]: I0130 09:30:00.158788 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496090-qlbqf"] Jan 30 09:30:00 crc kubenswrapper[4870]: E0130 09:30:00.159665 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cda615d-6f79-49fc-812e-28f590544089" containerName="extract-content" Jan 30 09:30:00 crc kubenswrapper[4870]: I0130 09:30:00.159681 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cda615d-6f79-49fc-812e-28f590544089" containerName="extract-content" Jan 30 09:30:00 crc kubenswrapper[4870]: E0130 09:30:00.159717 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cda615d-6f79-49fc-812e-28f590544089" containerName="registry-server" Jan 30 09:30:00 crc kubenswrapper[4870]: I0130 09:30:00.159726 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cda615d-6f79-49fc-812e-28f590544089" containerName="registry-server" Jan 30 09:30:00 crc kubenswrapper[4870]: E0130 09:30:00.159763 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cda615d-6f79-49fc-812e-28f590544089" containerName="extract-utilities" Jan 30 09:30:00 crc kubenswrapper[4870]: I0130 09:30:00.159774 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cda615d-6f79-49fc-812e-28f590544089" containerName="extract-utilities" Jan 30 09:30:00 crc kubenswrapper[4870]: I0130 09:30:00.160038 4870 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0cda615d-6f79-49fc-812e-28f590544089" containerName="registry-server" Jan 30 09:30:00 crc kubenswrapper[4870]: I0130 09:30:00.160931 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-qlbqf" Jan 30 09:30:00 crc kubenswrapper[4870]: I0130 09:30:00.163486 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 09:30:00 crc kubenswrapper[4870]: I0130 09:30:00.163710 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 09:30:00 crc kubenswrapper[4870]: I0130 09:30:00.176091 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496090-qlbqf"] Jan 30 09:30:00 crc kubenswrapper[4870]: I0130 09:30:00.220424 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9j52\" (UniqueName: \"kubernetes.io/projected/7370c9a2-2978-4149-8f1f-2c3686a18809-kube-api-access-z9j52\") pod \"collect-profiles-29496090-qlbqf\" (UID: \"7370c9a2-2978-4149-8f1f-2c3686a18809\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-qlbqf" Jan 30 09:30:00 crc kubenswrapper[4870]: I0130 09:30:00.220692 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7370c9a2-2978-4149-8f1f-2c3686a18809-config-volume\") pod \"collect-profiles-29496090-qlbqf\" (UID: \"7370c9a2-2978-4149-8f1f-2c3686a18809\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-qlbqf" Jan 30 09:30:00 crc kubenswrapper[4870]: I0130 09:30:00.220895 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7370c9a2-2978-4149-8f1f-2c3686a18809-secret-volume\") pod \"collect-profiles-29496090-qlbqf\" (UID: \"7370c9a2-2978-4149-8f1f-2c3686a18809\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-qlbqf" Jan 30 09:30:00 crc kubenswrapper[4870]: I0130 09:30:00.322799 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9j52\" (UniqueName: \"kubernetes.io/projected/7370c9a2-2978-4149-8f1f-2c3686a18809-kube-api-access-z9j52\") pod \"collect-profiles-29496090-qlbqf\" (UID: \"7370c9a2-2978-4149-8f1f-2c3686a18809\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-qlbqf" Jan 30 09:30:00 crc kubenswrapper[4870]: I0130 09:30:00.323810 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7370c9a2-2978-4149-8f1f-2c3686a18809-config-volume\") pod \"collect-profiles-29496090-qlbqf\" (UID: \"7370c9a2-2978-4149-8f1f-2c3686a18809\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-qlbqf" Jan 30 09:30:00 crc kubenswrapper[4870]: I0130 09:30:00.324868 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7370c9a2-2978-4149-8f1f-2c3686a18809-config-volume\") pod \"collect-profiles-29496090-qlbqf\" (UID: \"7370c9a2-2978-4149-8f1f-2c3686a18809\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-qlbqf" Jan 30 09:30:00 crc kubenswrapper[4870]: I0130 09:30:00.325064 4870 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7370c9a2-2978-4149-8f1f-2c3686a18809-secret-volume\") pod \"collect-profiles-29496090-qlbqf\" (UID: \"7370c9a2-2978-4149-8f1f-2c3686a18809\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-qlbqf" Jan 30 09:30:00 crc kubenswrapper[4870]: I0130 09:30:00.333503 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7370c9a2-2978-4149-8f1f-2c3686a18809-secret-volume\") pod \"collect-profiles-29496090-qlbqf\" (UID: \"7370c9a2-2978-4149-8f1f-2c3686a18809\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-qlbqf" Jan 30 09:30:00 crc kubenswrapper[4870]: I0130 09:30:00.352950 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9j52\" (UniqueName: \"kubernetes.io/projected/7370c9a2-2978-4149-8f1f-2c3686a18809-kube-api-access-z9j52\") pod \"collect-profiles-29496090-qlbqf\" (UID: \"7370c9a2-2978-4149-8f1f-2c3686a18809\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-qlbqf" Jan 30 09:30:00 crc kubenswrapper[4870]: I0130 09:30:00.500535 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-qlbqf" Jan 30 09:30:01 crc kubenswrapper[4870]: I0130 09:30:01.126860 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496090-qlbqf"] Jan 30 09:30:01 crc kubenswrapper[4870]: I0130 09:30:01.929667 4870 generic.go:334] "Generic (PLEG): container finished" podID="7370c9a2-2978-4149-8f1f-2c3686a18809" containerID="d6806ea927b023231720616670fecd59d2c946289563c3ab71cac04e0e274c7f" exitCode=0 Jan 30 09:30:01 crc kubenswrapper[4870]: I0130 09:30:01.929722 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-qlbqf" event={"ID":"7370c9a2-2978-4149-8f1f-2c3686a18809","Type":"ContainerDied","Data":"d6806ea927b023231720616670fecd59d2c946289563c3ab71cac04e0e274c7f"} Jan 30 09:30:01 crc kubenswrapper[4870]: I0130 09:30:01.929756 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-qlbqf" event={"ID":"7370c9a2-2978-4149-8f1f-2c3686a18809","Type":"ContainerStarted","Data":"80cd502957099cb35bb72e89cba48705edf68362d02c1d28e48cacf34c0c3dbf"} Jan 30 09:30:03 crc kubenswrapper[4870]: I0130 09:30:03.347838 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-qlbqf" Jan 30 09:30:03 crc kubenswrapper[4870]: I0130 09:30:03.456062 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7370c9a2-2978-4149-8f1f-2c3686a18809-secret-volume\") pod \"7370c9a2-2978-4149-8f1f-2c3686a18809\" (UID: \"7370c9a2-2978-4149-8f1f-2c3686a18809\") " Jan 30 09:30:03 crc kubenswrapper[4870]: I0130 09:30:03.456194 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7370c9a2-2978-4149-8f1f-2c3686a18809-config-volume\") pod \"7370c9a2-2978-4149-8f1f-2c3686a18809\" (UID: \"7370c9a2-2978-4149-8f1f-2c3686a18809\") " Jan 30 09:30:03 crc kubenswrapper[4870]: I0130 09:30:03.456307 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9j52\" (UniqueName: \"kubernetes.io/projected/7370c9a2-2978-4149-8f1f-2c3686a18809-kube-api-access-z9j52\") pod \"7370c9a2-2978-4149-8f1f-2c3686a18809\" (UID: \"7370c9a2-2978-4149-8f1f-2c3686a18809\") " Jan 30 09:30:03 crc kubenswrapper[4870]: I0130 09:30:03.457305 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7370c9a2-2978-4149-8f1f-2c3686a18809-config-volume" (OuterVolumeSpecName: "config-volume") pod "7370c9a2-2978-4149-8f1f-2c3686a18809" (UID: "7370c9a2-2978-4149-8f1f-2c3686a18809"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 09:30:03 crc kubenswrapper[4870]: I0130 09:30:03.462253 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7370c9a2-2978-4149-8f1f-2c3686a18809-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7370c9a2-2978-4149-8f1f-2c3686a18809" (UID: "7370c9a2-2978-4149-8f1f-2c3686a18809"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:30:03 crc kubenswrapper[4870]: I0130 09:30:03.462274 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7370c9a2-2978-4149-8f1f-2c3686a18809-kube-api-access-z9j52" (OuterVolumeSpecName: "kube-api-access-z9j52") pod "7370c9a2-2978-4149-8f1f-2c3686a18809" (UID: "7370c9a2-2978-4149-8f1f-2c3686a18809"). InnerVolumeSpecName "kube-api-access-z9j52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:30:03 crc kubenswrapper[4870]: I0130 09:30:03.558732 4870 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7370c9a2-2978-4149-8f1f-2c3686a18809-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:30:03 crc kubenswrapper[4870]: I0130 09:30:03.558787 4870 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7370c9a2-2978-4149-8f1f-2c3686a18809-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:30:03 crc kubenswrapper[4870]: I0130 09:30:03.558808 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9j52\" (UniqueName: \"kubernetes.io/projected/7370c9a2-2978-4149-8f1f-2c3686a18809-kube-api-access-z9j52\") on node \"crc\" DevicePath \"\"" Jan 30 09:30:03 crc kubenswrapper[4870]: I0130 09:30:03.948186 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-qlbqf" event={"ID":"7370c9a2-2978-4149-8f1f-2c3686a18809","Type":"ContainerDied","Data":"80cd502957099cb35bb72e89cba48705edf68362d02c1d28e48cacf34c0c3dbf"} Jan 30 09:30:03 crc kubenswrapper[4870]: I0130 09:30:03.948229 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-qlbqf" Jan 30 09:30:03 crc kubenswrapper[4870]: I0130 09:30:03.948232 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80cd502957099cb35bb72e89cba48705edf68362d02c1d28e48cacf34c0c3dbf" Jan 30 09:30:04 crc kubenswrapper[4870]: I0130 09:30:04.425865 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj"] Jan 30 09:30:04 crc kubenswrapper[4870]: I0130 09:30:04.435567 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496045-wr5sj"] Jan 30 09:30:06 crc kubenswrapper[4870]: I0130 09:30:06.086015 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9c91153-7a90-4c60-811f-915f8ccf0bdf" path="/var/lib/kubelet/pods/e9c91153-7a90-4c60-811f-915f8ccf0bdf/volumes" Jan 30 09:30:39 crc kubenswrapper[4870]: I0130 09:30:39.305015 4870 scope.go:117] "RemoveContainer" containerID="479ba30159faf1bd5abe17d0fd8bcbe0c86c787b6f5f69ef68ac1e6330cdb3a2" Jan 30 09:31:55 crc kubenswrapper[4870]: I0130 09:31:55.250157 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:31:55 crc kubenswrapper[4870]: I0130 09:31:55.250710 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:32:25 crc kubenswrapper[4870]: I0130 09:32:25.250342 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 30 09:32:25 crc kubenswrapper[4870]: I0130 09:32:25.250929 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:32:55 crc kubenswrapper[4870]: I0130 09:32:55.250247 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:32:55 crc kubenswrapper[4870]: I0130 09:32:55.250886 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:32:55 crc kubenswrapper[4870]: I0130 09:32:55.250950 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 09:32:55 crc kubenswrapper[4870]: I0130 09:32:55.251846 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40"} pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:32:55 crc kubenswrapper[4870]: I0130 09:32:55.252027 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" containerID="cri-o://7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40" gracePeriod=600 Jan 30 09:32:55 crc kubenswrapper[4870]: E0130 09:32:55.386545 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:32:55 crc kubenswrapper[4870]: I0130 09:32:55.580077 4870 generic.go:334] "Generic (PLEG): container finished" podID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40" exitCode=0 Jan 30 09:32:55 crc kubenswrapper[4870]: I0130 09:32:55.580134 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerDied","Data":"7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40"} Jan 30 09:32:55 crc kubenswrapper[4870]: I0130 09:32:55.580213 4870 scope.go:117] "RemoveContainer" containerID="795d2451707be8d2499e0a626a854aa75de48e7e4bf87e79653cebe4102927af" Jan 30 09:32:55 crc kubenswrapper[4870]: I0130 09:32:55.581235 4870 scope.go:117] "RemoveContainer" 
containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40" Jan 30 09:32:55 crc kubenswrapper[4870]: E0130 09:32:55.581594 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:33:06 crc kubenswrapper[4870]: I0130 09:33:06.020227 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wcf6s"] Jan 30 09:33:06 crc kubenswrapper[4870]: E0130 09:33:06.021301 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7370c9a2-2978-4149-8f1f-2c3686a18809" containerName="collect-profiles" Jan 30 09:33:06 crc kubenswrapper[4870]: I0130 09:33:06.021314 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="7370c9a2-2978-4149-8f1f-2c3686a18809" containerName="collect-profiles" Jan 30 09:33:06 crc kubenswrapper[4870]: I0130 09:33:06.021501 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="7370c9a2-2978-4149-8f1f-2c3686a18809" containerName="collect-profiles" Jan 30 09:33:06 crc kubenswrapper[4870]: I0130 09:33:06.022983 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wcf6s" Jan 30 09:33:06 crc kubenswrapper[4870]: I0130 09:33:06.042757 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wcf6s"] Jan 30 09:33:06 crc kubenswrapper[4870]: I0130 09:33:06.170399 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99d2fe89-b1ad-4202-81d1-6565aca3e0cf-catalog-content\") pod \"redhat-marketplace-wcf6s\" (UID: \"99d2fe89-b1ad-4202-81d1-6565aca3e0cf\") " pod="openshift-marketplace/redhat-marketplace-wcf6s" Jan 30 09:33:06 crc kubenswrapper[4870]: I0130 09:33:06.170761 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mbwz\" (UniqueName: \"kubernetes.io/projected/99d2fe89-b1ad-4202-81d1-6565aca3e0cf-kube-api-access-7mbwz\") pod \"redhat-marketplace-wcf6s\" (UID: \"99d2fe89-b1ad-4202-81d1-6565aca3e0cf\") " pod="openshift-marketplace/redhat-marketplace-wcf6s" Jan 30 09:33:06 crc kubenswrapper[4870]: I0130 09:33:06.170905 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99d2fe89-b1ad-4202-81d1-6565aca3e0cf-utilities\") pod \"redhat-marketplace-wcf6s\" (UID: \"99d2fe89-b1ad-4202-81d1-6565aca3e0cf\") " pod="openshift-marketplace/redhat-marketplace-wcf6s" Jan 30 09:33:06 crc kubenswrapper[4870]: I0130 09:33:06.278131 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99d2fe89-b1ad-4202-81d1-6565aca3e0cf-catalog-content\") pod \"redhat-marketplace-wcf6s\" (UID: \"99d2fe89-b1ad-4202-81d1-6565aca3e0cf\") " pod="openshift-marketplace/redhat-marketplace-wcf6s" Jan 30 09:33:06 crc kubenswrapper[4870]: I0130 09:33:06.278247 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mbwz\" (UniqueName: 
\"kubernetes.io/projected/99d2fe89-b1ad-4202-81d1-6565aca3e0cf-kube-api-access-7mbwz\") pod \"redhat-marketplace-wcf6s\" (UID: \"99d2fe89-b1ad-4202-81d1-6565aca3e0cf\") " pod="openshift-marketplace/redhat-marketplace-wcf6s" Jan 30 09:33:06 crc kubenswrapper[4870]: I0130 09:33:06.278353 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99d2fe89-b1ad-4202-81d1-6565aca3e0cf-utilities\") pod \"redhat-marketplace-wcf6s\" (UID: \"99d2fe89-b1ad-4202-81d1-6565aca3e0cf\") " pod="openshift-marketplace/redhat-marketplace-wcf6s" Jan 30 09:33:06 crc kubenswrapper[4870]: I0130 09:33:06.279183 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99d2fe89-b1ad-4202-81d1-6565aca3e0cf-catalog-content\") pod \"redhat-marketplace-wcf6s\" (UID: \"99d2fe89-b1ad-4202-81d1-6565aca3e0cf\") " pod="openshift-marketplace/redhat-marketplace-wcf6s" Jan 30 09:33:06 crc kubenswrapper[4870]: I0130 09:33:06.279328 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99d2fe89-b1ad-4202-81d1-6565aca3e0cf-utilities\") pod \"redhat-marketplace-wcf6s\" (UID: \"99d2fe89-b1ad-4202-81d1-6565aca3e0cf\") " pod="openshift-marketplace/redhat-marketplace-wcf6s" Jan 30 09:33:06 crc kubenswrapper[4870]: I0130 09:33:06.311794 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mbwz\" (UniqueName: \"kubernetes.io/projected/99d2fe89-b1ad-4202-81d1-6565aca3e0cf-kube-api-access-7mbwz\") pod \"redhat-marketplace-wcf6s\" (UID: \"99d2fe89-b1ad-4202-81d1-6565aca3e0cf\") " pod="openshift-marketplace/redhat-marketplace-wcf6s" Jan 30 09:33:06 crc kubenswrapper[4870]: I0130 09:33:06.345588 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wcf6s" Jan 30 09:33:06 crc kubenswrapper[4870]: I0130 09:33:06.848821 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wcf6s"] Jan 30 09:33:07 crc kubenswrapper[4870]: I0130 09:33:07.074431 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40" Jan 30 09:33:07 crc kubenswrapper[4870]: E0130 09:33:07.075355 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:33:07 crc kubenswrapper[4870]: I0130 09:33:07.701330 4870 generic.go:334] "Generic (PLEG): container finished" podID="99d2fe89-b1ad-4202-81d1-6565aca3e0cf" containerID="e560180dbb8a0d513ce90ea73f6871e8d968c0b8877f802a39a5bf86ec4eb0ae" exitCode=0 Jan 30 09:33:07 crc kubenswrapper[4870]: I0130 09:33:07.701420 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcf6s" event={"ID":"99d2fe89-b1ad-4202-81d1-6565aca3e0cf","Type":"ContainerDied","Data":"e560180dbb8a0d513ce90ea73f6871e8d968c0b8877f802a39a5bf86ec4eb0ae"} Jan 30 09:33:07 crc kubenswrapper[4870]: I0130 09:33:07.701611 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcf6s" event={"ID":"99d2fe89-b1ad-4202-81d1-6565aca3e0cf","Type":"ContainerStarted","Data":"3f23315f274408f80fb768e32c0b22194e01f4d4da149a314da0ca738103e0a6"} Jan 30 09:33:07 crc kubenswrapper[4870]: I0130 09:33:07.704336 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:33:08 crc kubenswrapper[4870]: I0130 09:33:08.711113 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcf6s" event={"ID":"99d2fe89-b1ad-4202-81d1-6565aca3e0cf","Type":"ContainerStarted","Data":"9f570762efcb27f98a8655a98226de2578ec425e08bb59eec1f4c19543e74add"} Jan 30 09:33:09 crc kubenswrapper[4870]: I0130 09:33:09.723731 4870 generic.go:334] "Generic (PLEG): container finished" podID="99d2fe89-b1ad-4202-81d1-6565aca3e0cf" containerID="9f570762efcb27f98a8655a98226de2578ec425e08bb59eec1f4c19543e74add" exitCode=0 Jan 30 09:33:09 crc kubenswrapper[4870]: I0130 09:33:09.724028 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcf6s" event={"ID":"99d2fe89-b1ad-4202-81d1-6565aca3e0cf","Type":"ContainerDied","Data":"9f570762efcb27f98a8655a98226de2578ec425e08bb59eec1f4c19543e74add"} Jan 30 09:33:10 crc kubenswrapper[4870]: I0130 09:33:10.738281 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcf6s" event={"ID":"99d2fe89-b1ad-4202-81d1-6565aca3e0cf","Type":"ContainerStarted","Data":"ba156291c9638d3baa958e1c97ef9cfd36f5d21f21cdca69c41e38e0a513809d"} Jan 30 09:33:10 crc kubenswrapper[4870]: I0130 09:33:10.787108 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wcf6s" podStartSLOduration=3.374033601 podStartE2EDuration="5.787085388s" podCreationTimestamp="2026-01-30 09:33:05 +0000 UTC" firstStartedPulling="2026-01-30 
09:33:07.70404575 +0000 UTC m=+5026.399592869" lastFinishedPulling="2026-01-30 09:33:10.117097547 +0000 UTC m=+5028.812644656" observedRunningTime="2026-01-30 09:33:10.776364503 +0000 UTC m=+5029.471911672" watchObservedRunningTime="2026-01-30 09:33:10.787085388 +0000 UTC m=+5029.482632497" Jan 30 09:33:16 crc kubenswrapper[4870]: I0130 09:33:16.346912 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wcf6s" Jan 30 09:33:16 crc kubenswrapper[4870]: I0130 09:33:16.347313 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wcf6s" Jan 30 09:33:16 crc kubenswrapper[4870]: I0130 09:33:16.399251 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wcf6s" Jan 30 09:33:16 crc kubenswrapper[4870]: I0130 09:33:16.848892 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wcf6s" Jan 30 09:33:16 crc kubenswrapper[4870]: I0130 09:33:16.909319 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wcf6s"] Jan 30 09:33:18 crc kubenswrapper[4870]: I0130 09:33:18.811403 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wcf6s" podUID="99d2fe89-b1ad-4202-81d1-6565aca3e0cf" containerName="registry-server" containerID="cri-o://ba156291c9638d3baa958e1c97ef9cfd36f5d21f21cdca69c41e38e0a513809d" gracePeriod=2 Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.074994 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40" Jan 30 09:33:19 crc kubenswrapper[4870]: E0130 09:33:19.075464 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.367286 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wcf6s" Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.473517 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99d2fe89-b1ad-4202-81d1-6565aca3e0cf-catalog-content\") pod \"99d2fe89-b1ad-4202-81d1-6565aca3e0cf\" (UID: \"99d2fe89-b1ad-4202-81d1-6565aca3e0cf\") " Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.473632 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mbwz\" (UniqueName: \"kubernetes.io/projected/99d2fe89-b1ad-4202-81d1-6565aca3e0cf-kube-api-access-7mbwz\") pod \"99d2fe89-b1ad-4202-81d1-6565aca3e0cf\" (UID: \"99d2fe89-b1ad-4202-81d1-6565aca3e0cf\") " Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.473762 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99d2fe89-b1ad-4202-81d1-6565aca3e0cf-utilities\") pod \"99d2fe89-b1ad-4202-81d1-6565aca3e0cf\" (UID: \"99d2fe89-b1ad-4202-81d1-6565aca3e0cf\") " Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.475130 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99d2fe89-b1ad-4202-81d1-6565aca3e0cf-utilities" (OuterVolumeSpecName: "utilities") pod "99d2fe89-b1ad-4202-81d1-6565aca3e0cf" (UID: "99d2fe89-b1ad-4202-81d1-6565aca3e0cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.499719 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99d2fe89-b1ad-4202-81d1-6565aca3e0cf-kube-api-access-7mbwz" (OuterVolumeSpecName: "kube-api-access-7mbwz") pod "99d2fe89-b1ad-4202-81d1-6565aca3e0cf" (UID: "99d2fe89-b1ad-4202-81d1-6565aca3e0cf"). InnerVolumeSpecName "kube-api-access-7mbwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.512244 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99d2fe89-b1ad-4202-81d1-6565aca3e0cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99d2fe89-b1ad-4202-81d1-6565aca3e0cf" (UID: "99d2fe89-b1ad-4202-81d1-6565aca3e0cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.576758 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99d2fe89-b1ad-4202-81d1-6565aca3e0cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.576799 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99d2fe89-b1ad-4202-81d1-6565aca3e0cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.576815 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mbwz\" (UniqueName: \"kubernetes.io/projected/99d2fe89-b1ad-4202-81d1-6565aca3e0cf-kube-api-access-7mbwz\") on node \"crc\" DevicePath \"\"" Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.847449 4870 generic.go:334] "Generic (PLEG): container finished" podID="99d2fe89-b1ad-4202-81d1-6565aca3e0cf" containerID="ba156291c9638d3baa958e1c97ef9cfd36f5d21f21cdca69c41e38e0a513809d" exitCode=0 Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.847518 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcf6s" event={"ID":"99d2fe89-b1ad-4202-81d1-6565aca3e0cf","Type":"ContainerDied","Data":"ba156291c9638d3baa958e1c97ef9cfd36f5d21f21cdca69c41e38e0a513809d"} Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.847557 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcf6s" event={"ID":"99d2fe89-b1ad-4202-81d1-6565aca3e0cf","Type":"ContainerDied","Data":"3f23315f274408f80fb768e32c0b22194e01f4d4da149a314da0ca738103e0a6"} Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.847596 4870 scope.go:117] "RemoveContainer" containerID="ba156291c9638d3baa958e1c97ef9cfd36f5d21f21cdca69c41e38e0a513809d" Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.847842 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wcf6s" Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.891564 4870 scope.go:117] "RemoveContainer" containerID="9f570762efcb27f98a8655a98226de2578ec425e08bb59eec1f4c19543e74add" Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.905831 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wcf6s"] Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.916405 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wcf6s"] Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.924094 4870 scope.go:117] "RemoveContainer" containerID="e560180dbb8a0d513ce90ea73f6871e8d968c0b8877f802a39a5bf86ec4eb0ae" Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.984127 4870 scope.go:117] "RemoveContainer" containerID="ba156291c9638d3baa958e1c97ef9cfd36f5d21f21cdca69c41e38e0a513809d" Jan 30 09:33:19 crc kubenswrapper[4870]: E0130 09:33:19.984552 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba156291c9638d3baa958e1c97ef9cfd36f5d21f21cdca69c41e38e0a513809d\": container with ID starting with ba156291c9638d3baa958e1c97ef9cfd36f5d21f21cdca69c41e38e0a513809d not found: ID does not exist" containerID="ba156291c9638d3baa958e1c97ef9cfd36f5d21f21cdca69c41e38e0a513809d" Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.984587 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba156291c9638d3baa958e1c97ef9cfd36f5d21f21cdca69c41e38e0a513809d"} err="failed to get container status \"ba156291c9638d3baa958e1c97ef9cfd36f5d21f21cdca69c41e38e0a513809d\": rpc error: code = NotFound desc = could not find container \"ba156291c9638d3baa958e1c97ef9cfd36f5d21f21cdca69c41e38e0a513809d\": container with ID starting with ba156291c9638d3baa958e1c97ef9cfd36f5d21f21cdca69c41e38e0a513809d not found: ID does not exist" Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.984611 4870 scope.go:117] "RemoveContainer" containerID="9f570762efcb27f98a8655a98226de2578ec425e08bb59eec1f4c19543e74add" Jan 30 09:33:19 crc kubenswrapper[4870]: E0130 09:33:19.984973 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f570762efcb27f98a8655a98226de2578ec425e08bb59eec1f4c19543e74add\": container with ID starting with 9f570762efcb27f98a8655a98226de2578ec425e08bb59eec1f4c19543e74add not found: ID does not exist" containerID="9f570762efcb27f98a8655a98226de2578ec425e08bb59eec1f4c19543e74add" Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.985075 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f570762efcb27f98a8655a98226de2578ec425e08bb59eec1f4c19543e74add"} err="failed to get container status \"9f570762efcb27f98a8655a98226de2578ec425e08bb59eec1f4c19543e74add\": rpc error: code = NotFound desc = could not find container \"9f570762efcb27f98a8655a98226de2578ec425e08bb59eec1f4c19543e74add\": container with ID starting with 9f570762efcb27f98a8655a98226de2578ec425e08bb59eec1f4c19543e74add not found: ID does not exist" Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.985144 4870 scope.go:117] "RemoveContainer" containerID="e560180dbb8a0d513ce90ea73f6871e8d968c0b8877f802a39a5bf86ec4eb0ae" Jan 30 09:33:19 crc kubenswrapper[4870]: E0130 09:33:19.985453 4870 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e560180dbb8a0d513ce90ea73f6871e8d968c0b8877f802a39a5bf86ec4eb0ae\": container with ID starting with e560180dbb8a0d513ce90ea73f6871e8d968c0b8877f802a39a5bf86ec4eb0ae not found: ID does not exist" containerID="e560180dbb8a0d513ce90ea73f6871e8d968c0b8877f802a39a5bf86ec4eb0ae" Jan 30 09:33:19 crc kubenswrapper[4870]: I0130 09:33:19.985478 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e560180dbb8a0d513ce90ea73f6871e8d968c0b8877f802a39a5bf86ec4eb0ae"} err="failed to get container status \"e560180dbb8a0d513ce90ea73f6871e8d968c0b8877f802a39a5bf86ec4eb0ae\": rpc error: code = NotFound desc = could not find container \"e560180dbb8a0d513ce90ea73f6871e8d968c0b8877f802a39a5bf86ec4eb0ae\": container with ID starting with e560180dbb8a0d513ce90ea73f6871e8d968c0b8877f802a39a5bf86ec4eb0ae not found: ID does not exist" Jan 30 09:33:20 crc kubenswrapper[4870]: I0130 09:33:20.086519 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99d2fe89-b1ad-4202-81d1-6565aca3e0cf" path="/var/lib/kubelet/pods/99d2fe89-b1ad-4202-81d1-6565aca3e0cf/volumes" Jan 30 09:33:22 crc kubenswrapper[4870]: I0130 09:33:22.872993 4870 generic.go:334] "Generic (PLEG): container finished" podID="dc531a0b-3bc8-45c0-935d-6425c9ee5e3a" containerID="e510327daa135710d56632aefcbd974a031585074a72c0b411cbaf1ee33eb7a9" exitCode=1 Jan 30 09:33:22 crc kubenswrapper[4870]: I0130 09:33:22.873037 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a","Type":"ContainerDied","Data":"e510327daa135710d56632aefcbd974a031585074a72c0b411cbaf1ee33eb7a9"} Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.224322 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.281805 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-openstack-config\") pod \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.281888 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-test-operator-ephemeral-workdir\") pod \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.281918 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-ca-certs\") pod \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.281939 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-test-operator-ephemeral-temporary\") pod \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.281989 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-config-data\") pod \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.282021 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-openstack-config-secret\") pod \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.282184 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.282285 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bggh\" (UniqueName: \"kubernetes.io/projected/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-kube-api-access-8bggh\") pod \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.282302 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-ssh-key\") pod \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\" (UID: \"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a\") " Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.284153 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-config-data" (OuterVolumeSpecName: "config-data") pod 
"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a" (UID: "dc531a0b-3bc8-45c0-935d-6425c9ee5e3a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.286280 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "dc531a0b-3bc8-45c0-935d-6425c9ee5e3a" (UID: "dc531a0b-3bc8-45c0-935d-6425c9ee5e3a"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.289918 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "dc531a0b-3bc8-45c0-935d-6425c9ee5e3a" (UID: "dc531a0b-3bc8-45c0-935d-6425c9ee5e3a"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.384475 4870 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.384515 4870 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.384529 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.799340 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "dc531a0b-3bc8-45c0-935d-6425c9ee5e3a" (UID: "dc531a0b-3bc8-45c0-935d-6425c9ee5e3a"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.799841 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-kube-api-access-8bggh" (OuterVolumeSpecName: "kube-api-access-8bggh") pod "dc531a0b-3bc8-45c0-935d-6425c9ee5e3a" (UID: "dc531a0b-3bc8-45c0-935d-6425c9ee5e3a"). InnerVolumeSpecName "kube-api-access-8bggh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.897434 4870 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.897471 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bggh\" (UniqueName: \"kubernetes.io/projected/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-kube-api-access-8bggh\") on node \"crc\" DevicePath \"\"" Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.910807 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"dc531a0b-3bc8-45c0-935d-6425c9ee5e3a","Type":"ContainerDied","Data":"1c881927627a156ba1416d85da9f209f5ec355b05e5dce2ac4e41aa800f2573b"} Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.910859 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c881927627a156ba1416d85da9f209f5ec355b05e5dce2ac4e41aa800f2573b" Jan 30 09:33:24 crc kubenswrapper[4870]: I0130 09:33:24.910922 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 09:33:25 crc kubenswrapper[4870]: I0130 09:33:25.025378 4870 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 30 09:33:25 crc kubenswrapper[4870]: I0130 09:33:25.026682 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "dc531a0b-3bc8-45c0-935d-6425c9ee5e3a" (UID: "dc531a0b-3bc8-45c0-935d-6425c9ee5e3a"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:33:25 crc kubenswrapper[4870]: I0130 09:33:25.039413 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "dc531a0b-3bc8-45c0-935d-6425c9ee5e3a" (UID: "dc531a0b-3bc8-45c0-935d-6425c9ee5e3a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:33:25 crc kubenswrapper[4870]: I0130 09:33:25.086048 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "dc531a0b-3bc8-45c0-935d-6425c9ee5e3a" (UID: "dc531a0b-3bc8-45c0-935d-6425c9ee5e3a"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:33:25 crc kubenswrapper[4870]: I0130 09:33:25.097711 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "dc531a0b-3bc8-45c0-935d-6425c9ee5e3a" (UID: "dc531a0b-3bc8-45c0-935d-6425c9ee5e3a"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 09:33:25 crc kubenswrapper[4870]: I0130 09:33:25.122478 4870 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 30 09:33:25 crc kubenswrapper[4870]: I0130 09:33:25.122642 4870 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 09:33:25 crc kubenswrapper[4870]: I0130 09:33:25.122766 4870 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 30 09:33:25 crc kubenswrapper[4870]: I0130 09:33:25.122951 4870 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dc531a0b-3bc8-45c0-935d-6425c9ee5e3a-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 09:33:25 crc kubenswrapper[4870]: I0130 09:33:25.123067 4870 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 30 09:33:29 crc kubenswrapper[4870]: I0130 09:33:29.757061 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 09:33:29 crc kubenswrapper[4870]: E0130 09:33:29.758167 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc531a0b-3bc8-45c0-935d-6425c9ee5e3a" containerName="tempest-tests-tempest-tests-runner" Jan 30 09:33:29 crc kubenswrapper[4870]: I0130 09:33:29.758182 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc531a0b-3bc8-45c0-935d-6425c9ee5e3a" containerName="tempest-tests-tempest-tests-runner" Jan 30 09:33:29 crc kubenswrapper[4870]: E0130 09:33:29.758203 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99d2fe89-b1ad-4202-81d1-6565aca3e0cf" containerName="extract-content" Jan 30 09:33:29 crc kubenswrapper[4870]: I0130 09:33:29.758210 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="99d2fe89-b1ad-4202-81d1-6565aca3e0cf" containerName="extract-content" Jan 30 09:33:29 crc kubenswrapper[4870]: E0130 09:33:29.758232 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99d2fe89-b1ad-4202-81d1-6565aca3e0cf" containerName="extract-utilities" Jan 30 09:33:29 crc kubenswrapper[4870]: I0130 09:33:29.758240 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="99d2fe89-b1ad-4202-81d1-6565aca3e0cf" containerName="extract-utilities" Jan 30 09:33:29 crc kubenswrapper[4870]: E0130 09:33:29.758255 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99d2fe89-b1ad-4202-81d1-6565aca3e0cf" containerName="registry-server" Jan 30 09:33:29 crc kubenswrapper[4870]: I0130 09:33:29.758263 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="99d2fe89-b1ad-4202-81d1-6565aca3e0cf" containerName="registry-server" Jan 30 09:33:29 crc kubenswrapper[4870]: I0130 09:33:29.758487 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc531a0b-3bc8-45c0-935d-6425c9ee5e3a" containerName="tempest-tests-tempest-tests-runner" Jan 30 09:33:29 crc kubenswrapper[4870]: I0130 09:33:29.758509 4870 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="99d2fe89-b1ad-4202-81d1-6565aca3e0cf" containerName="registry-server" Jan 30 09:33:29 crc kubenswrapper[4870]: I0130 09:33:29.759505 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:33:29 crc kubenswrapper[4870]: I0130 09:33:29.765373 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-w7v26" Jan 30 09:33:29 crc kubenswrapper[4870]: I0130 09:33:29.772645 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 09:33:29 crc kubenswrapper[4870]: I0130 09:33:29.825849 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ca368ef3-843d-4326-a899-9f4a1f6466c3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:33:29 crc kubenswrapper[4870]: I0130 09:33:29.826088 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs6r5\" (UniqueName: \"kubernetes.io/projected/ca368ef3-843d-4326-a899-9f4a1f6466c3-kube-api-access-xs6r5\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ca368ef3-843d-4326-a899-9f4a1f6466c3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:33:29 crc kubenswrapper[4870]: I0130 09:33:29.928054 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs6r5\" (UniqueName: \"kubernetes.io/projected/ca368ef3-843d-4326-a899-9f4a1f6466c3-kube-api-access-xs6r5\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ca368ef3-843d-4326-a899-9f4a1f6466c3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:33:29 crc kubenswrapper[4870]: I0130 09:33:29.928152 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ca368ef3-843d-4326-a899-9f4a1f6466c3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:33:29 crc kubenswrapper[4870]: I0130 09:33:29.928799 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ca368ef3-843d-4326-a899-9f4a1f6466c3\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:33:29 crc kubenswrapper[4870]: I0130 09:33:29.948957 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs6r5\" (UniqueName: \"kubernetes.io/projected/ca368ef3-843d-4326-a899-9f4a1f6466c3-kube-api-access-xs6r5\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ca368ef3-843d-4326-a899-9f4a1f6466c3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:33:29 crc kubenswrapper[4870]: I0130 09:33:29.968860 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod 
\"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ca368ef3-843d-4326-a899-9f4a1f6466c3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:33:30 crc kubenswrapper[4870]: I0130 09:33:30.080585 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:33:30 crc kubenswrapper[4870]: I0130 09:33:30.537904 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 09:33:30 crc kubenswrapper[4870]: I0130 09:33:30.969162 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"ca368ef3-843d-4326-a899-9f4a1f6466c3","Type":"ContainerStarted","Data":"85118fa6a8ee5d5813183f40be30b325e6a66007bd7394e610390adc3ed79761"} Jan 30 09:33:31 crc kubenswrapper[4870]: I0130 09:33:31.978721 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"ca368ef3-843d-4326-a899-9f4a1f6466c3","Type":"ContainerStarted","Data":"4c979379ab0a5d4ef1cfcc71e90ff80e7d593cf8b7ab0c153fa21e4a25e4bd34"} Jan 30 09:33:31 crc kubenswrapper[4870]: I0130 09:33:31.994915 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.042209956 podStartE2EDuration="2.994893875s" podCreationTimestamp="2026-01-30 09:33:29 +0000 UTC" firstStartedPulling="2026-01-30 09:33:30.541348772 +0000 UTC m=+5049.236895891" lastFinishedPulling="2026-01-30 09:33:31.494032701 +0000 UTC m=+5050.189579810" observedRunningTime="2026-01-30 09:33:31.992919184 +0000 UTC m=+5050.688466293" watchObservedRunningTime="2026-01-30 09:33:31.994893875 +0000 UTC m=+5050.690440984" Jan 30 09:33:34 crc kubenswrapper[4870]: I0130 09:33:34.074657 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40" Jan 30 09:33:34 crc kubenswrapper[4870]: E0130 09:33:34.075619 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:33:46 crc kubenswrapper[4870]: I0130 09:33:46.075322 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40" Jan 30 09:33:46 crc kubenswrapper[4870]: E0130 09:33:46.076208 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:34:01 crc kubenswrapper[4870]: I0130 09:34:01.075181 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40" Jan 30 09:34:01 crc kubenswrapper[4870]: E0130 09:34:01.076090 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:34:15 crc kubenswrapper[4870]: I0130 09:34:15.074605 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40" Jan 30 09:34:15 crc kubenswrapper[4870]: E0130 09:34:15.075539 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:34:22 crc kubenswrapper[4870]: I0130 09:34:22.278199 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ngvkt/must-gather-jl6kn"] Jan 30 09:34:22 crc kubenswrapper[4870]: I0130 09:34:22.280560 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ngvkt/must-gather-jl6kn" Jan 30 09:34:22 crc kubenswrapper[4870]: I0130 09:34:22.282771 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ngvkt"/"kube-root-ca.crt" Jan 30 09:34:22 crc kubenswrapper[4870]: I0130 09:34:22.283066 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ngvkt"/"openshift-service-ca.crt" Jan 30 09:34:22 crc kubenswrapper[4870]: I0130 09:34:22.283737 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ngvkt"/"default-dockercfg-4wdwf" Jan 30 09:34:22 crc kubenswrapper[4870]: I0130 09:34:22.297955 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ngvkt/must-gather-jl6kn"] Jan 30 09:34:22 crc kubenswrapper[4870]: I0130 09:34:22.316762 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6769b74f-20a7-48a8-b39b-d812418dbab4-must-gather-output\") pod \"must-gather-jl6kn\" (UID: \"6769b74f-20a7-48a8-b39b-d812418dbab4\") " pod="openshift-must-gather-ngvkt/must-gather-jl6kn" Jan 30 09:34:22 crc kubenswrapper[4870]: I0130 09:34:22.316832 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltpzc\" (UniqueName: \"kubernetes.io/projected/6769b74f-20a7-48a8-b39b-d812418dbab4-kube-api-access-ltpzc\") pod \"must-gather-jl6kn\" (UID: \"6769b74f-20a7-48a8-b39b-d812418dbab4\") " pod="openshift-must-gather-ngvkt/must-gather-jl6kn" Jan 30 09:34:22 crc kubenswrapper[4870]: I0130 09:34:22.418633 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltpzc\" (UniqueName: \"kubernetes.io/projected/6769b74f-20a7-48a8-b39b-d812418dbab4-kube-api-access-ltpzc\") pod \"must-gather-jl6kn\" (UID: \"6769b74f-20a7-48a8-b39b-d812418dbab4\") " pod="openshift-must-gather-ngvkt/must-gather-jl6kn" Jan 30 09:34:22 crc kubenswrapper[4870]: I0130 09:34:22.418908 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/6769b74f-20a7-48a8-b39b-d812418dbab4-must-gather-output\") pod \"must-gather-jl6kn\" (UID: \"6769b74f-20a7-48a8-b39b-d812418dbab4\") " pod="openshift-must-gather-ngvkt/must-gather-jl6kn" Jan 30 09:34:22 crc kubenswrapper[4870]: I0130 09:34:22.419362 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6769b74f-20a7-48a8-b39b-d812418dbab4-must-gather-output\") pod \"must-gather-jl6kn\" (UID: \"6769b74f-20a7-48a8-b39b-d812418dbab4\") " pod="openshift-must-gather-ngvkt/must-gather-jl6kn" Jan 30 09:34:22 crc kubenswrapper[4870]: I0130 09:34:22.441799 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltpzc\" (UniqueName: \"kubernetes.io/projected/6769b74f-20a7-48a8-b39b-d812418dbab4-kube-api-access-ltpzc\") pod \"must-gather-jl6kn\" (UID: \"6769b74f-20a7-48a8-b39b-d812418dbab4\") " pod="openshift-must-gather-ngvkt/must-gather-jl6kn" Jan 30 09:34:22 crc kubenswrapper[4870]: I0130 09:34:22.605514 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ngvkt/must-gather-jl6kn" Jan 30 09:34:23 crc kubenswrapper[4870]: I0130 09:34:23.208164 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ngvkt/must-gather-jl6kn"] Jan 30 09:34:23 crc kubenswrapper[4870]: I0130 09:34:23.479870 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngvkt/must-gather-jl6kn" event={"ID":"6769b74f-20a7-48a8-b39b-d812418dbab4","Type":"ContainerStarted","Data":"db30a1c0b10d50233163a592b08ff9db56770b71ea9dff19a11314adcb837750"} Jan 30 09:34:26 crc kubenswrapper[4870]: I0130 09:34:26.074708 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40" Jan 30 09:34:26 crc kubenswrapper[4870]: E0130 09:34:26.075545 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:34:29 crc kubenswrapper[4870]: I0130 09:34:29.535799 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngvkt/must-gather-jl6kn" event={"ID":"6769b74f-20a7-48a8-b39b-d812418dbab4","Type":"ContainerStarted","Data":"94b1b42bc8f7c336d8fcdcb61cfce9aebe53e0b19d58671be8fe96811a489868"} Jan 30 09:34:29 crc kubenswrapper[4870]: I0130 09:34:29.536207 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngvkt/must-gather-jl6kn" event={"ID":"6769b74f-20a7-48a8-b39b-d812418dbab4","Type":"ContainerStarted","Data":"6fb7d80f8c08e636076085f11ddc05d718f9baf6df6e1017b15952720a016879"} Jan 30 09:34:29 crc kubenswrapper[4870]: I0130 09:34:29.555304 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ngvkt/must-gather-jl6kn" podStartSLOduration=1.7722927689999999 podStartE2EDuration="7.555280891s" podCreationTimestamp="2026-01-30 09:34:22 +0000 UTC" firstStartedPulling="2026-01-30 09:34:23.225369438 +0000 UTC m=+5101.920916547" lastFinishedPulling="2026-01-30 09:34:29.00835756 +0000 UTC m=+5107.703904669" observedRunningTime="2026-01-30 09:34:29.549361187 +0000 UTC m=+5108.244908296" 
watchObservedRunningTime="2026-01-30 09:34:29.555280891 +0000 UTC m=+5108.250828000" Jan 30 09:34:34 crc kubenswrapper[4870]: I0130 09:34:34.229220 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ngvkt/crc-debug-95d7g"] Jan 30 09:34:34 crc kubenswrapper[4870]: I0130 09:34:34.230920 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ngvkt/crc-debug-95d7g" Jan 30 09:34:34 crc kubenswrapper[4870]: I0130 09:34:34.286343 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e8c90b48-a96e-4c40-aff8-ed26b5d74540-host\") pod \"crc-debug-95d7g\" (UID: \"e8c90b48-a96e-4c40-aff8-ed26b5d74540\") " pod="openshift-must-gather-ngvkt/crc-debug-95d7g" Jan 30 09:34:34 crc kubenswrapper[4870]: I0130 09:34:34.286458 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt22h\" (UniqueName: \"kubernetes.io/projected/e8c90b48-a96e-4c40-aff8-ed26b5d74540-kube-api-access-xt22h\") pod \"crc-debug-95d7g\" (UID: \"e8c90b48-a96e-4c40-aff8-ed26b5d74540\") " pod="openshift-must-gather-ngvkt/crc-debug-95d7g" Jan 30 09:34:34 crc kubenswrapper[4870]: I0130 09:34:34.388257 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e8c90b48-a96e-4c40-aff8-ed26b5d74540-host\") pod \"crc-debug-95d7g\" (UID: \"e8c90b48-a96e-4c40-aff8-ed26b5d74540\") " pod="openshift-must-gather-ngvkt/crc-debug-95d7g" Jan 30 09:34:34 crc kubenswrapper[4870]: I0130 09:34:34.388320 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e8c90b48-a96e-4c40-aff8-ed26b5d74540-host\") pod \"crc-debug-95d7g\" (UID: \"e8c90b48-a96e-4c40-aff8-ed26b5d74540\") " pod="openshift-must-gather-ngvkt/crc-debug-95d7g" Jan 30 09:34:34 crc kubenswrapper[4870]: I0130 09:34:34.388589 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt22h\" (UniqueName: \"kubernetes.io/projected/e8c90b48-a96e-4c40-aff8-ed26b5d74540-kube-api-access-xt22h\") pod \"crc-debug-95d7g\" (UID: \"e8c90b48-a96e-4c40-aff8-ed26b5d74540\") " pod="openshift-must-gather-ngvkt/crc-debug-95d7g" Jan 30 09:34:34 crc kubenswrapper[4870]: I0130 09:34:34.498829 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt22h\" (UniqueName: \"kubernetes.io/projected/e8c90b48-a96e-4c40-aff8-ed26b5d74540-kube-api-access-xt22h\") pod \"crc-debug-95d7g\" (UID: \"e8c90b48-a96e-4c40-aff8-ed26b5d74540\") " pod="openshift-must-gather-ngvkt/crc-debug-95d7g" Jan 30 09:34:34 crc kubenswrapper[4870]: I0130 09:34:34.553687 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ngvkt/crc-debug-95d7g" Jan 30 09:34:35 crc kubenswrapper[4870]: I0130 09:34:35.593038 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngvkt/crc-debug-95d7g" event={"ID":"e8c90b48-a96e-4c40-aff8-ed26b5d74540","Type":"ContainerStarted","Data":"b44d4efd6de06f1f27935332895224e30f5802b5af4349b9627d9aac0d98adc1"} Jan 30 09:34:39 crc kubenswrapper[4870]: I0130 09:34:39.075508 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40" Jan 30 09:34:39 crc kubenswrapper[4870]: E0130 09:34:39.076148 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:34:46 crc kubenswrapper[4870]: I0130 09:34:46.007235 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngvkt/crc-debug-95d7g" event={"ID":"e8c90b48-a96e-4c40-aff8-ed26b5d74540","Type":"ContainerStarted","Data":"c10769db3e913e00355f8966e729b5b6b9071d652adf30ce568e54dc81b0dfbf"} Jan 30 09:34:46 crc kubenswrapper[4870]: I0130 09:34:46.025918 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ngvkt/crc-debug-95d7g" podStartSLOduration=1.5754330539999999 podStartE2EDuration="12.025896102s" podCreationTimestamp="2026-01-30 09:34:34 +0000 UTC" firstStartedPulling="2026-01-30 09:34:34.591397416 +0000 UTC m=+5113.286944525" lastFinishedPulling="2026-01-30 09:34:45.041860474 +0000 UTC m=+5123.737407573" observedRunningTime="2026-01-30 09:34:46.019394679 +0000 UTC m=+5124.714941788" watchObservedRunningTime="2026-01-30 09:34:46.025896102 +0000 UTC m=+5124.721443211" Jan 30 09:34:51 crc kubenswrapper[4870]: I0130 09:34:51.075777 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40" Jan 30 09:34:51 crc kubenswrapper[4870]: E0130 09:34:51.078575 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:35:04 crc kubenswrapper[4870]: I0130 09:35:04.075273 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40" Jan 30 09:35:04 crc kubenswrapper[4870]: E0130 09:35:04.076117 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:35:18 crc kubenswrapper[4870]: I0130 09:35:18.074726 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40" Jan 
30 09:35:18 crc kubenswrapper[4870]: E0130 09:35:18.075487 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:35:33 crc kubenswrapper[4870]: I0130 09:35:33.075497 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40" Jan 30 09:35:33 crc kubenswrapper[4870]: E0130 09:35:33.078226 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:35:37 crc kubenswrapper[4870]: I0130 09:35:37.532633 4870 generic.go:334] "Generic (PLEG): container finished" podID="e8c90b48-a96e-4c40-aff8-ed26b5d74540" containerID="c10769db3e913e00355f8966e729b5b6b9071d652adf30ce568e54dc81b0dfbf" exitCode=0 Jan 30 09:35:37 crc kubenswrapper[4870]: I0130 09:35:37.532723 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngvkt/crc-debug-95d7g" event={"ID":"e8c90b48-a96e-4c40-aff8-ed26b5d74540","Type":"ContainerDied","Data":"c10769db3e913e00355f8966e729b5b6b9071d652adf30ce568e54dc81b0dfbf"} Jan 30 09:35:38 crc kubenswrapper[4870]: I0130 09:35:38.673459 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ngvkt/crc-debug-95d7g" Jan 30 09:35:38 crc kubenswrapper[4870]: I0130 09:35:38.715509 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ngvkt/crc-debug-95d7g"] Jan 30 09:35:38 crc kubenswrapper[4870]: I0130 09:35:38.723760 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ngvkt/crc-debug-95d7g"] Jan 30 09:35:38 crc kubenswrapper[4870]: I0130 09:35:38.825669 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt22h\" (UniqueName: \"kubernetes.io/projected/e8c90b48-a96e-4c40-aff8-ed26b5d74540-kube-api-access-xt22h\") pod \"e8c90b48-a96e-4c40-aff8-ed26b5d74540\" (UID: \"e8c90b48-a96e-4c40-aff8-ed26b5d74540\") " Jan 30 09:35:38 crc kubenswrapper[4870]: I0130 09:35:38.825778 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e8c90b48-a96e-4c40-aff8-ed26b5d74540-host\") pod \"e8c90b48-a96e-4c40-aff8-ed26b5d74540\" (UID: \"e8c90b48-a96e-4c40-aff8-ed26b5d74540\") " Jan 30 09:35:38 crc kubenswrapper[4870]: I0130 09:35:38.825943 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8c90b48-a96e-4c40-aff8-ed26b5d74540-host" (OuterVolumeSpecName: "host") pod "e8c90b48-a96e-4c40-aff8-ed26b5d74540" (UID: "e8c90b48-a96e-4c40-aff8-ed26b5d74540"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 09:35:38 crc kubenswrapper[4870]: I0130 09:35:38.826475 4870 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e8c90b48-a96e-4c40-aff8-ed26b5d74540-host\") on node \"crc\" DevicePath \"\"" Jan 30 09:35:38 crc kubenswrapper[4870]: I0130 09:35:38.836922 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8c90b48-a96e-4c40-aff8-ed26b5d74540-kube-api-access-xt22h" (OuterVolumeSpecName: "kube-api-access-xt22h") pod "e8c90b48-a96e-4c40-aff8-ed26b5d74540" (UID: "e8c90b48-a96e-4c40-aff8-ed26b5d74540"). InnerVolumeSpecName "kube-api-access-xt22h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:35:38 crc kubenswrapper[4870]: I0130 09:35:38.928548 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt22h\" (UniqueName: \"kubernetes.io/projected/e8c90b48-a96e-4c40-aff8-ed26b5d74540-kube-api-access-xt22h\") on node \"crc\" DevicePath \"\"" Jan 30 09:35:39 crc kubenswrapper[4870]: I0130 09:35:39.562638 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b44d4efd6de06f1f27935332895224e30f5802b5af4349b9627d9aac0d98adc1" Jan 30 09:35:39 crc kubenswrapper[4870]: I0130 09:35:39.562715 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ngvkt/crc-debug-95d7g" Jan 30 09:35:39 crc kubenswrapper[4870]: I0130 09:35:39.908832 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ngvkt/crc-debug-gqfg9"] Jan 30 09:35:39 crc kubenswrapper[4870]: E0130 09:35:39.909685 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c90b48-a96e-4c40-aff8-ed26b5d74540" containerName="container-00" Jan 30 09:35:39 crc kubenswrapper[4870]: I0130 09:35:39.909703 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c90b48-a96e-4c40-aff8-ed26b5d74540" containerName="container-00" Jan 30 09:35:39 crc kubenswrapper[4870]: I0130 09:35:39.909993 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8c90b48-a96e-4c40-aff8-ed26b5d74540" containerName="container-00" Jan 30 09:35:39 crc kubenswrapper[4870]: I0130 09:35:39.910949 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ngvkt/crc-debug-gqfg9" Jan 30 09:35:40 crc kubenswrapper[4870]: I0130 09:35:40.059028 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee69b9db-1ce7-4877-8e0a-f44a22b61917-host\") pod \"crc-debug-gqfg9\" (UID: \"ee69b9db-1ce7-4877-8e0a-f44a22b61917\") " pod="openshift-must-gather-ngvkt/crc-debug-gqfg9" Jan 30 09:35:40 crc kubenswrapper[4870]: I0130 09:35:40.059178 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtc5p\" (UniqueName: \"kubernetes.io/projected/ee69b9db-1ce7-4877-8e0a-f44a22b61917-kube-api-access-gtc5p\") pod \"crc-debug-gqfg9\" (UID: \"ee69b9db-1ce7-4877-8e0a-f44a22b61917\") " pod="openshift-must-gather-ngvkt/crc-debug-gqfg9" Jan 30 09:35:40 crc kubenswrapper[4870]: I0130 09:35:40.086210 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8c90b48-a96e-4c40-aff8-ed26b5d74540" path="/var/lib/kubelet/pods/e8c90b48-a96e-4c40-aff8-ed26b5d74540/volumes" Jan 30 09:35:40 crc kubenswrapper[4870]: I0130 09:35:40.161544 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee69b9db-1ce7-4877-8e0a-f44a22b61917-host\") pod \"crc-debug-gqfg9\" (UID: \"ee69b9db-1ce7-4877-8e0a-f44a22b61917\") " pod="openshift-must-gather-ngvkt/crc-debug-gqfg9" Jan 30 09:35:40 crc kubenswrapper[4870]: I0130 09:35:40.161681 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtc5p\" (UniqueName: \"kubernetes.io/projected/ee69b9db-1ce7-4877-8e0a-f44a22b61917-kube-api-access-gtc5p\") pod \"crc-debug-gqfg9\" (UID: \"ee69b9db-1ce7-4877-8e0a-f44a22b61917\") " pod="openshift-must-gather-ngvkt/crc-debug-gqfg9" Jan 30 09:35:40 crc kubenswrapper[4870]: I0130 09:35:40.161685 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee69b9db-1ce7-4877-8e0a-f44a22b61917-host\") pod \"crc-debug-gqfg9\" (UID: \"ee69b9db-1ce7-4877-8e0a-f44a22b61917\") " pod="openshift-must-gather-ngvkt/crc-debug-gqfg9" Jan 30 09:35:40 crc kubenswrapper[4870]: I0130 09:35:40.178079 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtc5p\" (UniqueName: \"kubernetes.io/projected/ee69b9db-1ce7-4877-8e0a-f44a22b61917-kube-api-access-gtc5p\") pod \"crc-debug-gqfg9\" (UID: \"ee69b9db-1ce7-4877-8e0a-f44a22b61917\") " pod="openshift-must-gather-ngvkt/crc-debug-gqfg9" Jan 30 09:35:40 crc kubenswrapper[4870]: I0130 09:35:40.229634 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ngvkt/crc-debug-gqfg9" Jan 30 09:35:40 crc kubenswrapper[4870]: I0130 09:35:40.572031 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngvkt/crc-debug-gqfg9" event={"ID":"ee69b9db-1ce7-4877-8e0a-f44a22b61917","Type":"ContainerStarted","Data":"0d15bc4f5f1cdd64af25a4f04a8ea84d36babcf03d847e6f57d65c94e43dffba"} Jan 30 09:35:41 crc kubenswrapper[4870]: I0130 09:35:41.583236 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngvkt/crc-debug-gqfg9" event={"ID":"ee69b9db-1ce7-4877-8e0a-f44a22b61917","Type":"ContainerStarted","Data":"59efa965c2adfae979420af1ce17d266ff44eee37b7a015447ef2d517758282c"} Jan 30 09:35:41 crc kubenswrapper[4870]: I0130 09:35:41.598118 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ngvkt/crc-debug-gqfg9" podStartSLOduration=2.598102275 podStartE2EDuration="2.598102275s" podCreationTimestamp="2026-01-30 09:35:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 09:35:41.594789883 +0000 UTC m=+5180.290336992" watchObservedRunningTime="2026-01-30 09:35:41.598102275 +0000 UTC m=+5180.293649384" Jan 30 09:35:42 crc kubenswrapper[4870]: I0130 09:35:42.596696 4870 generic.go:334] "Generic (PLEG): container finished" podID="ee69b9db-1ce7-4877-8e0a-f44a22b61917" containerID="59efa965c2adfae979420af1ce17d266ff44eee37b7a015447ef2d517758282c" exitCode=0 Jan 30 09:35:42 crc kubenswrapper[4870]: I0130 09:35:42.596751 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngvkt/crc-debug-gqfg9" event={"ID":"ee69b9db-1ce7-4877-8e0a-f44a22b61917","Type":"ContainerDied","Data":"59efa965c2adfae979420af1ce17d266ff44eee37b7a015447ef2d517758282c"} Jan 30 09:35:43 crc kubenswrapper[4870]: I0130 09:35:43.775758 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ngvkt/crc-debug-gqfg9" Jan 30 09:35:43 crc kubenswrapper[4870]: I0130 09:35:43.944885 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtc5p\" (UniqueName: \"kubernetes.io/projected/ee69b9db-1ce7-4877-8e0a-f44a22b61917-kube-api-access-gtc5p\") pod \"ee69b9db-1ce7-4877-8e0a-f44a22b61917\" (UID: \"ee69b9db-1ce7-4877-8e0a-f44a22b61917\") " Jan 30 09:35:43 crc kubenswrapper[4870]: I0130 09:35:43.944939 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee69b9db-1ce7-4877-8e0a-f44a22b61917-host\") pod \"ee69b9db-1ce7-4877-8e0a-f44a22b61917\" (UID: \"ee69b9db-1ce7-4877-8e0a-f44a22b61917\") " Jan 30 09:35:43 crc kubenswrapper[4870]: I0130 09:35:43.945152 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee69b9db-1ce7-4877-8e0a-f44a22b61917-host" (OuterVolumeSpecName: "host") pod "ee69b9db-1ce7-4877-8e0a-f44a22b61917" (UID: "ee69b9db-1ce7-4877-8e0a-f44a22b61917"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 09:35:43 crc kubenswrapper[4870]: I0130 09:35:43.945675 4870 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee69b9db-1ce7-4877-8e0a-f44a22b61917-host\") on node \"crc\" DevicePath \"\"" Jan 30 09:35:43 crc kubenswrapper[4870]: I0130 09:35:43.956705 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee69b9db-1ce7-4877-8e0a-f44a22b61917-kube-api-access-gtc5p" (OuterVolumeSpecName: "kube-api-access-gtc5p") pod "ee69b9db-1ce7-4877-8e0a-f44a22b61917" (UID: "ee69b9db-1ce7-4877-8e0a-f44a22b61917"). InnerVolumeSpecName "kube-api-access-gtc5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:35:43 crc kubenswrapper[4870]: I0130 09:35:43.985845 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ngvkt/crc-debug-gqfg9"] Jan 30 09:35:43 crc kubenswrapper[4870]: I0130 09:35:43.999689 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ngvkt/crc-debug-gqfg9"] Jan 30 09:35:44 crc kubenswrapper[4870]: I0130 09:35:44.048028 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtc5p\" (UniqueName: \"kubernetes.io/projected/ee69b9db-1ce7-4877-8e0a-f44a22b61917-kube-api-access-gtc5p\") on node \"crc\" DevicePath \"\"" Jan 30 09:35:44 crc kubenswrapper[4870]: I0130 09:35:44.085815 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee69b9db-1ce7-4877-8e0a-f44a22b61917" path="/var/lib/kubelet/pods/ee69b9db-1ce7-4877-8e0a-f44a22b61917/volumes" Jan 30 09:35:44 crc kubenswrapper[4870]: I0130 09:35:44.664683 4870 scope.go:117] "RemoveContainer" containerID="59efa965c2adfae979420af1ce17d266ff44eee37b7a015447ef2d517758282c" Jan 30 09:35:44 crc kubenswrapper[4870]: I0130 09:35:44.664738 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ngvkt/crc-debug-gqfg9" Jan 30 09:35:45 crc kubenswrapper[4870]: I0130 09:35:45.198643 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ngvkt/crc-debug-g52p8"] Jan 30 09:35:45 crc kubenswrapper[4870]: E0130 09:35:45.200410 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee69b9db-1ce7-4877-8e0a-f44a22b61917" containerName="container-00" Jan 30 09:35:45 crc kubenswrapper[4870]: I0130 09:35:45.200437 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee69b9db-1ce7-4877-8e0a-f44a22b61917" containerName="container-00" Jan 30 09:35:45 crc kubenswrapper[4870]: I0130 09:35:45.201035 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee69b9db-1ce7-4877-8e0a-f44a22b61917" containerName="container-00" Jan 30 09:35:45 crc kubenswrapper[4870]: I0130 09:35:45.202434 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ngvkt/crc-debug-g52p8" Jan 30 09:35:45 crc kubenswrapper[4870]: I0130 09:35:45.383118 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/384e367c-c2a4-4dbf-bb60-a903590c8ead-host\") pod \"crc-debug-g52p8\" (UID: \"384e367c-c2a4-4dbf-bb60-a903590c8ead\") " pod="openshift-must-gather-ngvkt/crc-debug-g52p8" Jan 30 09:35:45 crc kubenswrapper[4870]: I0130 09:35:45.383323 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56nxq\" (UniqueName: \"kubernetes.io/projected/384e367c-c2a4-4dbf-bb60-a903590c8ead-kube-api-access-56nxq\") pod \"crc-debug-g52p8\" (UID: \"384e367c-c2a4-4dbf-bb60-a903590c8ead\") " pod="openshift-must-gather-ngvkt/crc-debug-g52p8" Jan 30 09:35:45 crc kubenswrapper[4870]: I0130 09:35:45.484930 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56nxq\" (UniqueName: \"kubernetes.io/projected/384e367c-c2a4-4dbf-bb60-a903590c8ead-kube-api-access-56nxq\") pod \"crc-debug-g52p8\" (UID: \"384e367c-c2a4-4dbf-bb60-a903590c8ead\") " pod="openshift-must-gather-ngvkt/crc-debug-g52p8" Jan 30 09:35:45 crc kubenswrapper[4870]: I0130 09:35:45.485127 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/384e367c-c2a4-4dbf-bb60-a903590c8ead-host\") pod \"crc-debug-g52p8\" (UID: \"384e367c-c2a4-4dbf-bb60-a903590c8ead\") " pod="openshift-must-gather-ngvkt/crc-debug-g52p8" Jan 30 09:35:45 crc kubenswrapper[4870]: I0130 09:35:45.485321 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/384e367c-c2a4-4dbf-bb60-a903590c8ead-host\") pod \"crc-debug-g52p8\" (UID: \"384e367c-c2a4-4dbf-bb60-a903590c8ead\") " pod="openshift-must-gather-ngvkt/crc-debug-g52p8" Jan 30 09:35:45 crc kubenswrapper[4870]: I0130 09:35:45.507698 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56nxq\" (UniqueName: \"kubernetes.io/projected/384e367c-c2a4-4dbf-bb60-a903590c8ead-kube-api-access-56nxq\") pod \"crc-debug-g52p8\" (UID: \"384e367c-c2a4-4dbf-bb60-a903590c8ead\") " pod="openshift-must-gather-ngvkt/crc-debug-g52p8" Jan 30 09:35:45 crc kubenswrapper[4870]: I0130 09:35:45.528032 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ngvkt/crc-debug-g52p8"
Jan 30 09:35:45 crc kubenswrapper[4870]: I0130 09:35:45.683218 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngvkt/crc-debug-g52p8" event={"ID":"384e367c-c2a4-4dbf-bb60-a903590c8ead","Type":"ContainerStarted","Data":"a95981cbb1ed15aeacc2b4b511f205c4253ce1a27ef8212031933dad38699908"}
Jan 30 09:35:46 crc kubenswrapper[4870]: I0130 09:35:46.694388 4870 generic.go:334] "Generic (PLEG): container finished" podID="384e367c-c2a4-4dbf-bb60-a903590c8ead" containerID="5c29178d54a5c60a34db9756b29000aea4dcd5f164a6b57720fc7f6a4eda55cd" exitCode=0
Jan 30 09:35:46 crc kubenswrapper[4870]: I0130 09:35:46.694485 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngvkt/crc-debug-g52p8" event={"ID":"384e367c-c2a4-4dbf-bb60-a903590c8ead","Type":"ContainerDied","Data":"5c29178d54a5c60a34db9756b29000aea4dcd5f164a6b57720fc7f6a4eda55cd"}
Jan 30 09:35:46 crc kubenswrapper[4870]: I0130 09:35:46.728810 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ngvkt/crc-debug-g52p8"]
Jan 30 09:35:46 crc kubenswrapper[4870]: I0130 09:35:46.738351 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ngvkt/crc-debug-g52p8"]
Jan 30 09:35:47 crc kubenswrapper[4870]: I0130 09:35:47.829177 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ngvkt/crc-debug-g52p8"
Jan 30 09:35:47 crc kubenswrapper[4870]: I0130 09:35:47.940857 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/384e367c-c2a4-4dbf-bb60-a903590c8ead-host\") pod \"384e367c-c2a4-4dbf-bb60-a903590c8ead\" (UID: \"384e367c-c2a4-4dbf-bb60-a903590c8ead\") "
Jan 30 09:35:47 crc kubenswrapper[4870]: I0130 09:35:47.941005 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56nxq\" (UniqueName: \"kubernetes.io/projected/384e367c-c2a4-4dbf-bb60-a903590c8ead-kube-api-access-56nxq\") pod \"384e367c-c2a4-4dbf-bb60-a903590c8ead\" (UID: \"384e367c-c2a4-4dbf-bb60-a903590c8ead\") "
Jan 30 09:35:47 crc kubenswrapper[4870]: I0130 09:35:47.941003 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/384e367c-c2a4-4dbf-bb60-a903590c8ead-host" (OuterVolumeSpecName: "host") pod "384e367c-c2a4-4dbf-bb60-a903590c8ead" (UID: "384e367c-c2a4-4dbf-bb60-a903590c8ead"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 09:35:47 crc kubenswrapper[4870]: I0130 09:35:47.941546 4870 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/384e367c-c2a4-4dbf-bb60-a903590c8ead-host\") on node \"crc\" DevicePath \"\""
Jan 30 09:35:47 crc kubenswrapper[4870]: I0130 09:35:47.948221 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/384e367c-c2a4-4dbf-bb60-a903590c8ead-kube-api-access-56nxq" (OuterVolumeSpecName: "kube-api-access-56nxq") pod "384e367c-c2a4-4dbf-bb60-a903590c8ead" (UID: "384e367c-c2a4-4dbf-bb60-a903590c8ead"). InnerVolumeSpecName "kube-api-access-56nxq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 09:35:48 crc kubenswrapper[4870]: I0130 09:35:48.043423 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56nxq\" (UniqueName: \"kubernetes.io/projected/384e367c-c2a4-4dbf-bb60-a903590c8ead-kube-api-access-56nxq\") on node \"crc\" DevicePath \"\""
Jan 30 09:35:48 crc kubenswrapper[4870]: I0130 09:35:48.075212 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40"
Jan 30 09:35:48 crc kubenswrapper[4870]: E0130 09:35:48.075475 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 09:35:48 crc kubenswrapper[4870]: I0130 09:35:48.085331 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="384e367c-c2a4-4dbf-bb60-a903590c8ead" path="/var/lib/kubelet/pods/384e367c-c2a4-4dbf-bb60-a903590c8ead/volumes"
Jan 30 09:35:48 crc kubenswrapper[4870]: I0130 09:35:48.716340 4870 scope.go:117] "RemoveContainer" containerID="5c29178d54a5c60a34db9756b29000aea4dcd5f164a6b57720fc7f6a4eda55cd"
Jan 30 09:35:48 crc kubenswrapper[4870]: I0130 09:35:48.716360 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ngvkt/crc-debug-g52p8"
Jan 30 09:36:02 crc kubenswrapper[4870]: I0130 09:36:02.083525 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40"
Jan 30 09:36:02 crc kubenswrapper[4870]: E0130 09:36:02.084218 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 09:36:15 crc kubenswrapper[4870]: I0130 09:36:15.075288 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40"
Jan 30 09:36:15 crc kubenswrapper[4870]: E0130 09:36:15.076800 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 09:36:22 crc kubenswrapper[4870]: I0130 09:36:22.350413 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5564cc7ccb-wnwrs_304a486b-b7cf-4418-82c9-7795b2331284/barbican-api/0.log"
Jan 30 09:36:22 crc kubenswrapper[4870]: I0130 09:36:22.525438 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5564cc7ccb-wnwrs_304a486b-b7cf-4418-82c9-7795b2331284/barbican-api-log/0.log"
Jan 30 09:36:22 crc kubenswrapper[4870]: I0130 09:36:22.543096 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-54fb8bddb6-w78xn_8a32795f-6328-4d51-a69a-60be965b17f0/barbican-keystone-listener/0.log"
Jan 30 09:36:22 crc kubenswrapper[4870]: I0130 09:36:22.656680 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-54fb8bddb6-w78xn_8a32795f-6328-4d51-a69a-60be965b17f0/barbican-keystone-listener-log/0.log"
Jan 30 09:36:22 crc kubenswrapper[4870]: I0130 09:36:22.768903 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6b94ff658f-bmntr_a3bc44ff-bc04-4e44-bb13-ff62f43057f5/barbican-worker/0.log"
Jan 30 09:36:22 crc kubenswrapper[4870]: I0130 09:36:22.865149 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6b94ff658f-bmntr_a3bc44ff-bc04-4e44-bb13-ff62f43057f5/barbican-worker-log/0.log"
Jan 30 09:36:23 crc kubenswrapper[4870]: I0130 09:36:23.003979 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-v6v4c_620aba2c-f389-4fc9-a27c-28c937894f7d/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 09:36:23 crc kubenswrapper[4870]: I0130 09:36:23.389027 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0944a474-a4a5-4ff7-95cf-cd783c051a16/ceilometer-notification-agent/0.log"
Jan 30 09:36:23 crc kubenswrapper[4870]: I0130 09:36:23.406918 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0944a474-a4a5-4ff7-95cf-cd783c051a16/ceilometer-central-agent/0.log"
Jan 30 09:36:23 crc kubenswrapper[4870]: I0130 09:36:23.417835 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0944a474-a4a5-4ff7-95cf-cd783c051a16/proxy-httpd/0.log"
Jan 30 09:36:23 crc kubenswrapper[4870]: I0130 09:36:23.469318 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0944a474-a4a5-4ff7-95cf-cd783c051a16/sg-core/0.log"
Jan 30 09:36:23 crc kubenswrapper[4870]: I0130 09:36:23.634469 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_dcb916a9-c812-4e35-91d2-a4cc4ef78fc3/cinder-api-log/0.log"
Jan 30 09:36:23 crc kubenswrapper[4870]: I0130 09:36:23.992247 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_cecf4070-2dd9-496d-bf4d-7f456eb6ed72/probe/0.log"
Jan 30 09:36:24 crc kubenswrapper[4870]: I0130 09:36:24.033084 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_dcb916a9-c812-4e35-91d2-a4cc4ef78fc3/cinder-api/0.log"
Jan 30 09:36:24 crc kubenswrapper[4870]: I0130 09:36:24.185058 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_cecf4070-2dd9-496d-bf4d-7f456eb6ed72/cinder-backup/0.log"
Jan 30 09:36:24 crc kubenswrapper[4870]: I0130 09:36:24.223652 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e7a1bbc0-d212-4a83-bea0-d40c261ddb18/cinder-scheduler/0.log"
Jan 30 09:36:24 crc kubenswrapper[4870]: I0130 09:36:24.358601 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e7a1bbc0-d212-4a83-bea0-d40c261ddb18/probe/0.log"
Jan 30 09:36:24 crc kubenswrapper[4870]: I0130 09:36:24.550971 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_06465a52-3f34-45fd-b95e-e679adcb59e6/probe/0.log"
Jan 30 09:36:24 crc kubenswrapper[4870]: I0130 09:36:24.551251 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_06465a52-3f34-45fd-b95e-e679adcb59e6/cinder-volume/0.log"
Jan 30 09:36:24 crc kubenswrapper[4870]: I0130 09:36:24.771005 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_56215e10-017e-4662-92ab-8f25178c0fab/cinder-volume/0.log"
Jan 30 09:36:24 crc kubenswrapper[4870]: I0130 09:36:24.811256 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_56215e10-017e-4662-92ab-8f25178c0fab/probe/0.log"
Jan 30 09:36:24 crc kubenswrapper[4870]: I0130 09:36:24.938772 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-948rh_1eea19c9-87be-4160-8c11-c7ecd13cf088/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 09:36:25 crc kubenswrapper[4870]: I0130 09:36:25.143927 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-66968b76ff-bk2j6_3f90c906-9b1e-4df6-8b94-367ae01963b7/init/0.log"
Jan 30 09:36:25 crc kubenswrapper[4870]: I0130 09:36:25.233086 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-b4t7n_f32f4b01-631a-4f4b-8ffb-f0873b819de0/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 09:36:25 crc kubenswrapper[4870]: I0130 09:36:25.417504 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-66968b76ff-bk2j6_3f90c906-9b1e-4df6-8b94-367ae01963b7/init/0.log"
Jan 30 09:36:25 crc kubenswrapper[4870]: I0130 09:36:25.480393 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-p67q7_9bef3cd3-94ab-486e-91de-c0ede57769d8/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 09:36:25 crc kubenswrapper[4870]: I0130 09:36:25.556802 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-66968b76ff-bk2j6_3f90c906-9b1e-4df6-8b94-367ae01963b7/dnsmasq-dns/0.log"
Jan 30 09:36:25 crc kubenswrapper[4870]: I0130 09:36:25.713693 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_743b8276-eb2e-49fa-b493-fb83f20837ed/glance-httpd/0.log"
Jan 30 09:36:25 crc kubenswrapper[4870]: I0130 09:36:25.743075 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_743b8276-eb2e-49fa-b493-fb83f20837ed/glance-log/0.log"
Jan 30 09:36:25 crc kubenswrapper[4870]: I0130 09:36:25.906384 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_2efb8d24-a358-43df-af27-d74c4cf88e1f/glance-httpd/0.log"
Jan 30 09:36:25 crc kubenswrapper[4870]: I0130 09:36:25.923093 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_2efb8d24-a358-43df-af27-d74c4cf88e1f/glance-log/0.log"
Jan 30 09:36:26 crc kubenswrapper[4870]: I0130 09:36:26.257169 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-769d7654db-gw44c_b6c9337c-50ce-4c5c-a84f-8092d25fa1e2/horizon/0.log"
Jan 30 09:36:26 crc kubenswrapper[4870]: I0130 09:36:26.284466 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-8t9nx_51d5d5e3-867b-4ec9-9fca-07038b83ba29/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 09:36:26 crc kubenswrapper[4870]: I0130 09:36:26.523238 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-z4hkm_82fb960a-335c-4d35-baed-122cd1cb515d/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 09:36:26 crc kubenswrapper[4870]: I0130 09:36:26.700451 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-769d7654db-gw44c_b6c9337c-50ce-4c5c-a84f-8092d25fa1e2/horizon-log/0.log"
Jan 30 09:36:26 crc kubenswrapper[4870]: I0130 09:36:26.748958 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29496061-tjh7b_43a9af69-f9ef-444e-8505-ccf1eac1a036/keystone-cron/0.log"
Jan 30 09:36:26 crc kubenswrapper[4870]: I0130 09:36:26.968959 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_0deb54ca-48c2-4b35-88c0-dbad5e8b9272/kube-state-metrics/0.log"
Jan 30 09:36:27 crc kubenswrapper[4870]: I0130 09:36:27.025195 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-55b585f57f-9h2lg_cb9f4cfa-0698-47dd-9319-47b185d2f937/keystone-api/0.log"
Jan 30 09:36:27 crc kubenswrapper[4870]: I0130 09:36:27.129832 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-26lfr_9e214e41-a575-467c-a053-d6807c4f1512/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 09:36:27 crc kubenswrapper[4870]: I0130 09:36:27.791666 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6d69bf9957-gj6dt_a50dec5c-d013-42b7-8a60-c405d5c93362/neutron-api/0.log"
Jan 30 09:36:27 crc kubenswrapper[4870]: I0130 09:36:27.810090 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6d69bf9957-gj6dt_a50dec5c-d013-42b7-8a60-c405d5c93362/neutron-httpd/0.log"
Jan 30 09:36:27 crc kubenswrapper[4870]: I0130 09:36:27.845964 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-5hlg8_bbcba502-7991-4f7b-bdbd-b112cec436b9/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 09:36:28 crc kubenswrapper[4870]: I0130 09:36:28.080285 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40"
Jan 30 09:36:28 crc kubenswrapper[4870]: E0130 09:36:28.080521 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 09:36:28 crc kubenswrapper[4870]: I0130 09:36:28.490163 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_9834ddd4-269a-463c-953c-1bf07a7ffdf0/nova-cell0-conductor-conductor/0.log"
Jan 30 09:36:28 crc kubenswrapper[4870]: I0130 09:36:28.901658 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e5686258-ed50-49a1-920b-77e9bbe01c55/nova-cell1-conductor-conductor/0.log"
Jan 30 09:36:29 crc kubenswrapper[4870]: I0130 09:36:29.092957 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f6319a2a-594b-4da1-be42-ad0918221515/nova-cell1-novncproxy-novncproxy/0.log"
Jan 30 09:36:29 crc kubenswrapper[4870]: I0130 09:36:29.266018 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ed40aa22-a330-46ab-9971-39e764e63ff7/nova-api-log/0.log"
Jan 30 09:36:29 crc kubenswrapper[4870]: I0130 09:36:29.402375 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-4b7pb_da926ccc-5787-4741-a00c-1163494adb5e/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 09:36:29 crc kubenswrapper[4870]: I0130 09:36:29.558057 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ccdea203-220a-457e-b00f-61b48afc7329/nova-metadata-log/0.log"
Jan 30 09:36:29 crc kubenswrapper[4870]: I0130 09:36:29.641551 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ed40aa22-a330-46ab-9971-39e764e63ff7/nova-api-api/0.log"
Jan 30 09:36:30 crc kubenswrapper[4870]: I0130 09:36:30.553276 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_31607550-5ccc-4b0b-9fbd-18007a61dcff/mysql-bootstrap/0.log"
Jan 30 09:36:30 crc kubenswrapper[4870]: I0130 09:36:30.556373 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ecf0d9a5-3778-453d-ad3a-0d28eb2e71a6/nova-scheduler-scheduler/0.log"
Jan 30 09:36:30 crc kubenswrapper[4870]: I0130 09:36:30.836143 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_31607550-5ccc-4b0b-9fbd-18007a61dcff/galera/0.log"
Jan 30 09:36:30 crc kubenswrapper[4870]: I0130 09:36:30.845795 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_31607550-5ccc-4b0b-9fbd-18007a61dcff/mysql-bootstrap/0.log"
Jan 30 09:36:31 crc kubenswrapper[4870]: I0130 09:36:31.049056 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a/mysql-bootstrap/0.log"
Jan 30 09:36:31 crc kubenswrapper[4870]: I0130 09:36:31.276602 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a/galera/0.log"
Jan 30 09:36:31 crc kubenswrapper[4870]: I0130 09:36:31.303698 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2ce45bb8-e721-40bb-a9fb-ac0d6b0deb4a/mysql-bootstrap/0.log"
Jan 30 09:36:31 crc kubenswrapper[4870]: I0130 09:36:31.449844 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_204a0d39-f7b0-4468-a82f-9fcc49fc1281/openstackclient/0.log"
Jan 30 09:36:31 crc kubenswrapper[4870]: I0130 09:36:31.540728 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-56vf8_eaa9048d-8c54-4054-87d1-69c6746c1479/openstack-network-exporter/0.log"
Jan 30 09:36:32 crc kubenswrapper[4870]: I0130 09:36:32.395822 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ccdea203-220a-457e-b00f-61b48afc7329/nova-metadata-metadata/0.log"
Jan 30 09:36:32 crc kubenswrapper[4870]: I0130 09:36:32.630749 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gznh8_b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2/ovsdb-server-init/0.log"
Jan 30 09:36:32 crc kubenswrapper[4870]: I0130 09:36:32.840637 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gznh8_b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2/ovsdb-server-init/0.log"
Jan 30 09:36:32 crc kubenswrapper[4870]: I0130 09:36:32.873862 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gznh8_b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2/ovsdb-server/0.log"
Jan 30 09:36:33 crc kubenswrapper[4870]: I0130 09:36:33.121262 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-rwchz_496b707b-8de6-4228-b4fd-a48f3709586c/ovn-controller/0.log"
Jan 30 09:36:33 crc kubenswrapper[4870]: I0130 09:36:33.266206 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gznh8_b3231fb5-8edc-4a0d-a0f2-bf9aeb8e4fe2/ovs-vswitchd/0.log"
Jan 30 09:36:33 crc kubenswrapper[4870]: I0130 09:36:33.464400 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-8z72z_11f380d9-7c41-4b65-a46d-01c14ac81c07/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 09:36:33 crc kubenswrapper[4870]: I0130 09:36:33.525787 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d69aef12-ac48-41f7-8a14-a561edab0ae7/openstack-network-exporter/0.log"
Jan 30 09:36:33 crc kubenswrapper[4870]: I0130 09:36:33.529407 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d69aef12-ac48-41f7-8a14-a561edab0ae7/ovn-northd/0.log"
Jan 30 09:36:33 crc kubenswrapper[4870]: I0130 09:36:33.710279 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e9a5fd23-1240-4284-91cf-b57f4b2e3d02/openstack-network-exporter/0.log"
Jan 30 09:36:33 crc kubenswrapper[4870]: I0130 09:36:33.860498 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e9a5fd23-1240-4284-91cf-b57f4b2e3d02/ovsdbserver-nb/0.log"
Jan 30 09:36:33 crc kubenswrapper[4870]: I0130 09:36:33.983140 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_625f2d84-6699-4e9f-881e-e96509760e9d/openstack-network-exporter/0.log"
Jan 30 09:36:34 crc kubenswrapper[4870]: I0130 09:36:34.085177 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_625f2d84-6699-4e9f-881e-e96509760e9d/ovsdbserver-sb/0.log"
Jan 30 09:36:34 crc kubenswrapper[4870]: I0130 09:36:34.406048 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-56cfc8cc98-pfz9w_a0bafb1e-cef8-4a8c-bb78-a5d11d098691/placement-api/0.log"
Jan 30 09:36:34 crc kubenswrapper[4870]: I0130 09:36:34.422687 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_1a4d5397-32f0-4cc0-919b-cf4ed004b797/init-config-reloader/0.log"
Jan 30 09:36:34 crc kubenswrapper[4870]: I0130 09:36:34.463437 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-56cfc8cc98-pfz9w_a0bafb1e-cef8-4a8c-bb78-a5d11d098691/placement-log/0.log"
Jan 30 09:36:34 crc kubenswrapper[4870]: I0130 09:36:34.644795 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_1a4d5397-32f0-4cc0-919b-cf4ed004b797/config-reloader/0.log"
Jan 30 09:36:34 crc kubenswrapper[4870]: I0130 09:36:34.717444 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_1a4d5397-32f0-4cc0-919b-cf4ed004b797/init-config-reloader/0.log"
Jan 30 09:36:34 crc kubenswrapper[4870]: I0130 09:36:34.776613 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_1a4d5397-32f0-4cc0-919b-cf4ed004b797/prometheus/0.log"
Jan 30 09:36:34 crc kubenswrapper[4870]: I0130 09:36:34.855080 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_1a4d5397-32f0-4cc0-919b-cf4ed004b797/thanos-sidecar/0.log"
Jan 30 09:36:34 crc kubenswrapper[4870]: I0130 09:36:34.969914 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2575ea2c-dc22-4ca2-bf0b-d67eaa330832/setup-container/0.log"
Jan 30 09:36:35 crc kubenswrapper[4870]: I0130 09:36:35.252033 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2575ea2c-dc22-4ca2-bf0b-d67eaa330832/setup-container/0.log"
Jan 30 09:36:35 crc kubenswrapper[4870]: I0130 09:36:35.291218 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_2ab884a9-b47a-476a-8f89-140093b96527/setup-container/0.log"
Jan 30 09:36:35 crc kubenswrapper[4870]: I0130 09:36:35.368081 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2575ea2c-dc22-4ca2-bf0b-d67eaa330832/rabbitmq/0.log"
Jan 30 09:36:35 crc kubenswrapper[4870]: I0130 09:36:35.579093 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_2ab884a9-b47a-476a-8f89-140093b96527/setup-container/0.log"
Jan 30 09:36:35 crc kubenswrapper[4870]: I0130 09:36:35.634521 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_2ab884a9-b47a-476a-8f89-140093b96527/rabbitmq/0.log"
Jan 30 09:36:35 crc kubenswrapper[4870]: I0130 09:36:35.674030 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bf05f72e-aa42-4296-a7dc-8b742d6e0aab/setup-container/0.log"
Jan 30 09:36:35 crc kubenswrapper[4870]: I0130 09:36:35.901640 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bf05f72e-aa42-4296-a7dc-8b742d6e0aab/setup-container/0.log"
Jan 30 09:36:35 crc kubenswrapper[4870]: I0130 09:36:35.976845 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bf05f72e-aa42-4296-a7dc-8b742d6e0aab/rabbitmq/0.log"
Jan 30 09:36:35 crc kubenswrapper[4870]: I0130 09:36:35.985535 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-2ns29_7c9e0c7d-dc65-4862-99da-326bc8d45bfd/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 09:36:36 crc kubenswrapper[4870]: I0130 09:36:36.261750 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-zdqzm_68089c9f-f566-4e65-b2ea-dd65a4d9012c/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 09:36:36 crc kubenswrapper[4870]: I0130 09:36:36.269185 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-fpd48_c22cad0f-b909-42fa-95c5-2536e1105161/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 09:36:36 crc kubenswrapper[4870]: I0130 09:36:36.811144 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-zl6bw_a685318c-e23f-4192-8ab4-7dbf24880b0d/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 09:36:36 crc kubenswrapper[4870]: I0130 09:36:36.884185 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-j8w7g_07db545c-df21-4f19-ad37-3071248b8672/ssh-known-hosts-edpm-deployment/0.log"
Jan 30 09:36:37 crc kubenswrapper[4870]: I0130 09:36:37.189655 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-847c478677-wtndf_c01b58ab-bb54-448b-83de-f70f08378751/proxy-server/0.log"
Jan 30 09:36:37 crc kubenswrapper[4870]: I0130 09:36:37.268824 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-847c478677-wtndf_c01b58ab-bb54-448b-83de-f70f08378751/proxy-httpd/0.log"
Jan 30 09:36:37 crc kubenswrapper[4870]: I0130 09:36:37.411539 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-gkrl7_4406e732-41a8-48a1-954a-6dbe4483a79a/swift-ring-rebalance/0.log"
Jan 30 09:36:37 crc kubenswrapper[4870]: I0130 09:36:37.508243 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_46634e41-7d5b-4181-b824-716bb37fca47/account-auditor/0.log"
Jan 30 09:36:37 crc kubenswrapper[4870]: I0130 09:36:37.555908 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_46634e41-7d5b-4181-b824-716bb37fca47/account-reaper/0.log"
Jan 30 09:36:37 crc kubenswrapper[4870]: I0130 09:36:37.734976 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_46634e41-7d5b-4181-b824-716bb37fca47/account-replicator/0.log"
Jan 30 09:36:37 crc kubenswrapper[4870]: I0130 09:36:37.799180 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_46634e41-7d5b-4181-b824-716bb37fca47/container-auditor/0.log"
Jan 30 09:36:37 crc kubenswrapper[4870]: I0130 09:36:37.816853 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_46634e41-7d5b-4181-b824-716bb37fca47/account-server/0.log"
Jan 30 09:36:37 crc kubenswrapper[4870]: I0130 09:36:37.865237 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_46634e41-7d5b-4181-b824-716bb37fca47/container-replicator/0.log"
Jan 30 09:36:37 crc kubenswrapper[4870]: I0130 09:36:37.996860 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_46634e41-7d5b-4181-b824-716bb37fca47/container-server/0.log"
Jan 30 09:36:38 crc kubenswrapper[4870]: I0130 09:36:38.019488 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_46634e41-7d5b-4181-b824-716bb37fca47/container-updater/0.log"
Jan 30 09:36:38 crc kubenswrapper[4870]: I0130 09:36:38.105407 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_46634e41-7d5b-4181-b824-716bb37fca47/object-auditor/0.log"
Jan 30 09:36:38 crc kubenswrapper[4870]: I0130 09:36:38.113868 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_46634e41-7d5b-4181-b824-716bb37fca47/object-expirer/0.log"
Jan 30 09:36:38 crc kubenswrapper[4870]: I0130 09:36:38.246620 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_46634e41-7d5b-4181-b824-716bb37fca47/object-replicator/0.log"
Jan 30 09:36:38 crc kubenswrapper[4870]: I0130 09:36:38.283451 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_46634e41-7d5b-4181-b824-716bb37fca47/object-server/0.log"
Jan 30 09:36:38 crc kubenswrapper[4870]: I0130 09:36:38.348927 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_46634e41-7d5b-4181-b824-716bb37fca47/rsync/0.log"
Jan 30 09:36:38 crc kubenswrapper[4870]: I0130 09:36:38.352026 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_46634e41-7d5b-4181-b824-716bb37fca47/object-updater/0.log"
Jan 30 09:36:38 crc kubenswrapper[4870]: I0130 09:36:38.554763 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_46634e41-7d5b-4181-b824-716bb37fca47/swift-recon-cron/0.log"
Jan 30 09:36:38 crc kubenswrapper[4870]: I0130 09:36:38.688988 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-pxbfz_1e93cbad-07e7-4073-a577-b666a6901a1d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 09:36:38 crc kubenswrapper[4870]: I0130 09:36:38.995848 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_ca368ef3-843d-4326-a899-9f4a1f6466c3/test-operator-logs-container/0.log"
Jan 30 09:36:39 crc kubenswrapper[4870]: I0130 09:36:39.074747 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40"
Jan 30 09:36:39 crc kubenswrapper[4870]: E0130 09:36:39.075088 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 09:36:39 crc kubenswrapper[4870]: I0130 09:36:39.246386 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-ntrfh_2f708fca-b1a9-432a-acbe-df74341208d2/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 09:36:39 crc kubenswrapper[4870]: I0130 09:36:39.252176 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_dc531a0b-3bc8-45c0-935d-6425c9ee5e3a/tempest-tests-tempest-tests-runner/0.log"
Jan 30 09:36:39 crc kubenswrapper[4870]: I0130 09:36:39.985139 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_4061e0b3-e3ae-4ef0-a979-6028df77da5c/watcher-applier/0.log"
Jan 30 09:36:40 crc kubenswrapper[4870]: I0130 09:36:40.681774 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_964cd6aa-bebd-412e-bd1c-001d151a90e8/watcher-api-log/0.log"
Jan 30 09:36:43 crc kubenswrapper[4870]: I0130 09:36:43.865382 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_83b9fe73-9106-4f9b-9272-6f12e3fb8177/watcher-decision-engine/0.log"
Jan 30 09:36:44 crc kubenswrapper[4870]: I0130 09:36:44.356368 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_964cd6aa-bebd-412e-bd1c-001d151a90e8/watcher-api/0.log"
Jan 30 09:36:46 crc kubenswrapper[4870]: I0130 09:36:46.166432 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_d691b652-0077-4709-9e9d-16b87c8d3d3c/memcached/0.log"
Jan 30 09:36:54 crc kubenswrapper[4870]: I0130 09:36:54.075045 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40"
Jan 30 09:36:54 crc kubenswrapper[4870]: E0130 09:36:54.076634 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 09:37:05 crc kubenswrapper[4870]: I0130 09:37:05.075181 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40"
Jan 30 09:37:05 crc kubenswrapper[4870]: E0130 09:37:05.076531 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 09:37:12 crc kubenswrapper[4870]: I0130 09:37:12.410916 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw_6f3c7406-d095-434b-a79a-f24373a9b141/util/0.log"
Jan 30 09:37:12 crc kubenswrapper[4870]: I0130 09:37:12.594753 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw_6f3c7406-d095-434b-a79a-f24373a9b141/util/0.log"
Jan 30 09:37:12 crc kubenswrapper[4870]: I0130 09:37:12.607646 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw_6f3c7406-d095-434b-a79a-f24373a9b141/pull/0.log"
Jan 30 09:37:12 crc kubenswrapper[4870]: I0130 09:37:12.609870 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw_6f3c7406-d095-434b-a79a-f24373a9b141/pull/0.log"
Jan 30 09:37:12 crc kubenswrapper[4870]: I0130 09:37:12.809248 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw_6f3c7406-d095-434b-a79a-f24373a9b141/util/0.log"
Jan 30 09:37:12 crc kubenswrapper[4870]: I0130 09:37:12.810377 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw_6f3c7406-d095-434b-a79a-f24373a9b141/pull/0.log"
Jan 30 09:37:12 crc kubenswrapper[4870]: I0130 09:37:12.830403 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_538103733790cb5f656298b9ceab00deaa6551dd449a0a275a081caf7atkgmw_6f3c7406-d095-434b-a79a-f24373a9b141/extract/0.log"
Jan 30 09:37:13 crc kubenswrapper[4870]: I0130 09:37:13.100355 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-wfpg9_54c01287-d66d-46bc-bbb8-7532263099c5/manager/0.log"
Jan 30 09:37:13 crc kubenswrapper[4870]: I0130 09:37:13.108392 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-hsfpq_e973c5f3-3291-4d4b-85ce-806ef6f83c1a/manager/0.log"
Jan 30 09:37:13 crc kubenswrapper[4870]: I0130 09:37:13.313034 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-grbz8_dfd97388-6e7f-4f4a-9e71-7a32a28b1dcb/manager/0.log"
Jan 30 09:37:13 crc kubenswrapper[4870]: I0130 09:37:13.401791 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-tkrpg_96be73fb-f1fc-4c5c-a643-7b9dcc832ac6/manager/0.log"
Jan 30 09:37:13 crc kubenswrapper[4870]: I0130 09:37:13.566908 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-wlkxq_b9449ead-e087-4895-a88a-8bdfe0835ebd/manager/0.log"
Jan 30 09:37:13 crc kubenswrapper[4870]: I0130 09:37:13.580225 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-hbmf7_925313c0-6800-4a27-814b-887b46cf49ad/manager/0.log"
Jan 30 09:37:13 crc kubenswrapper[4870]: I0130 09:37:13.842389 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-5vfrj_5680ceb3-f5ec-4d9e-a313-13564402bff2/manager/0.log"
Jan 30 09:37:14 crc kubenswrapper[4870]: I0130 09:37:14.105517 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-spzcf_46e6ee9b-0d57-4f76-8cf6-e8cea47b5f05/manager/0.log"
Jan 30 09:37:14 crc kubenswrapper[4870]: I0130 09:37:14.143129 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-rhfst_db7aeba5-92f5-4887-9a6a-92d8c57650d2/manager/0.log"
Jan 30 09:37:14 crc kubenswrapper[4870]: I0130 09:37:14.146478 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-j9bdn_5cde6cc5-f427-4349-8c8a-3dce0deac5a9/manager/0.log"
Jan 30 09:37:14 crc kubenswrapper[4870]: I0130 09:37:14.367081 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-59rt2_ea3efedd-cb74-48c7-b246-b188bac37ed4/manager/0.log"
Jan 30 09:37:14 crc kubenswrapper[4870]: I0130 09:37:14.427523 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-2xdfh_0ea209e2-96bf-4919-ad8f-f86de2b78ab1/manager/0.log"
Jan 30 09:37:14 crc kubenswrapper[4870]: I0130 09:37:14.648748 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-cpn6f_604ff246-0f47-4c2c-8940-d76f10dce14e/manager/0.log"
Jan 30 09:37:14 crc kubenswrapper[4870]: I0130 09:37:14.668037 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-4sftq_2ee622d2-acd4-4eec-9fbb-12b5bae7e32f/manager/0.log"
Jan 30 09:37:14 crc kubenswrapper[4870]: I0130 09:37:14.832039 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dwj9r8_be7a26e3-9284-4316-bce7-7bc15c9178bd/manager/0.log"
Jan 30 09:37:15 crc kubenswrapper[4870]: I0130 09:37:15.065228 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-594f7f44c-vnpnd_b5c8b38a-bdec-4120-9802-5a35815eca01/operator/0.log"
Jan 30 09:37:15 crc kubenswrapper[4870]: I0130 09:37:15.321258 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-4bccf_c79c7300-5362-40dc-a952-2193e7a6908b/registry-server/0.log"
Jan 30 09:37:15 crc kubenswrapper[4870]: I0130 09:37:15.577217 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-t4hbm_ec9257db-1c02-4160-9c89-7df62f2ce602/manager/0.log"
Jan 30 09:37:15 crc kubenswrapper[4870]: I0130 09:37:15.598484 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-mx5xp_274d3a56-3caf-4dd2-b122-e3b45a3eec6e/manager/0.log"
Jan 30 09:37:15 crc kubenswrapper[4870]: I0130 09:37:15.940213 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-sds6v_b706cc39-6af6-4a91-b2a2-6160148dadae/operator/0.log"
Jan 30 09:37:16 crc kubenswrapper[4870]: I0130 09:37:16.074337 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40"
Jan 30 09:37:16 crc kubenswrapper[4870]: E0130 09:37:16.074783 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 09:37:16 crc kubenswrapper[4870]: I0130 09:37:16.109632 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-497sn_2de7363a-3627-42bb-a58f-7bad2e414192/manager/0.log"
Jan 30 09:37:16 crc kubenswrapper[4870]: I0130 09:37:16.349315 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-t8ncr_378c24d4-b8c1-4cd2-a85c-8449aa00ad3e/manager/0.log"
Jan 30 09:37:16 crc kubenswrapper[4870]: I0130 09:37:16.579608 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-bmzrd_0319ce7f-95ab-4abf-9101-bf436cc74bf4/manager/0.log"
Jan 30 09:37:16 crc kubenswrapper[4870]: I0130 09:37:16.679201 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7b7dd57594-2p68v_d6956410-92c0-40bf-b1c1-a3353ccf1bbc/manager/0.log"
Jan 30 09:37:16 crc kubenswrapper[4870]: I0130 09:37:16.743853 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-65544cf747-sgxjd_fcdb20a3-7229-48e6-8f12-d1b6a5c892f3/manager/0.log"
Jan 30 09:37:26 crc kubenswrapper[4870]: I0130 09:37:26.817572 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gwxbl"]
Jan 30 09:37:26 crc kubenswrapper[4870]: E0130 09:37:26.829895 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="384e367c-c2a4-4dbf-bb60-a903590c8ead" containerName="container-00"
Jan 30 09:37:26 crc kubenswrapper[4870]: I0130 09:37:26.829921 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="384e367c-c2a4-4dbf-bb60-a903590c8ead" containerName="container-00"
Jan 30 09:37:26 crc kubenswrapper[4870]: I0130 09:37:26.830267 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="384e367c-c2a4-4dbf-bb60-a903590c8ead" containerName="container-00"
Jan 30 09:37:26 crc kubenswrapper[4870]: I0130 09:37:26.832563 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gwxbl"
Jan 30 09:37:26 crc kubenswrapper[4870]: I0130 09:37:26.832658 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gwxbl"]
Jan 30 09:37:26 crc kubenswrapper[4870]: I0130 09:37:26.925948 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6135191-d11b-46b6-9eaf-08a0ffb73387-utilities\") pod \"redhat-operators-gwxbl\" (UID: \"b6135191-d11b-46b6-9eaf-08a0ffb73387\") " pod="openshift-marketplace/redhat-operators-gwxbl"
Jan 30 09:37:26 crc kubenswrapper[4870]: I0130 09:37:26.926025 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv4ws\" (UniqueName: \"kubernetes.io/projected/b6135191-d11b-46b6-9eaf-08a0ffb73387-kube-api-access-xv4ws\") pod \"redhat-operators-gwxbl\" (UID: \"b6135191-d11b-46b6-9eaf-08a0ffb73387\") " pod="openshift-marketplace/redhat-operators-gwxbl"
Jan 30 09:37:26 crc kubenswrapper[4870]: I0130 09:37:26.926174 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6135191-d11b-46b6-9eaf-08a0ffb73387-catalog-content\") pod \"redhat-operators-gwxbl\" (UID: \"b6135191-d11b-46b6-9eaf-08a0ffb73387\") " pod="openshift-marketplace/redhat-operators-gwxbl"
Jan 30 09:37:27 crc kubenswrapper[4870]: I0130 09:37:27.028671 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6135191-d11b-46b6-9eaf-08a0ffb73387-utilities\") pod \"redhat-operators-gwxbl\" (UID: \"b6135191-d11b-46b6-9eaf-08a0ffb73387\") " pod="openshift-marketplace/redhat-operators-gwxbl"
Jan 30 09:37:27 crc kubenswrapper[4870]: I0130 09:37:27.028801 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv4ws\" (UniqueName: \"kubernetes.io/projected/b6135191-d11b-46b6-9eaf-08a0ffb73387-kube-api-access-xv4ws\") pod \"redhat-operators-gwxbl\" (UID: \"b6135191-d11b-46b6-9eaf-08a0ffb73387\") " pod="openshift-marketplace/redhat-operators-gwxbl"
Jan 30 09:37:27 crc kubenswrapper[4870]: I0130 09:37:27.028937 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6135191-d11b-46b6-9eaf-08a0ffb73387-catalog-content\") pod \"redhat-operators-gwxbl\" (UID: \"b6135191-d11b-46b6-9eaf-08a0ffb73387\") " pod="openshift-marketplace/redhat-operators-gwxbl"
Jan 30 09:37:27 crc kubenswrapper[4870]: I0130 09:37:27.029181 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6135191-d11b-46b6-9eaf-08a0ffb73387-utilities\") pod \"redhat-operators-gwxbl\" (UID: \"b6135191-d11b-46b6-9eaf-08a0ffb73387\") " pod="openshift-marketplace/redhat-operators-gwxbl"
Jan 30 09:37:27 crc kubenswrapper[4870]: I0130 09:37:27.029474 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6135191-d11b-46b6-9eaf-08a0ffb73387-catalog-content\") pod \"redhat-operators-gwxbl\" (UID: \"b6135191-d11b-46b6-9eaf-08a0ffb73387\") " pod="openshift-marketplace/redhat-operators-gwxbl"
Jan 30 09:37:27 crc kubenswrapper[4870]: I0130 09:37:27.066016 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv4ws\" (UniqueName: \"kubernetes.io/projected/b6135191-d11b-46b6-9eaf-08a0ffb73387-kube-api-access-xv4ws\") pod \"redhat-operators-gwxbl\" (UID: \"b6135191-d11b-46b6-9eaf-08a0ffb73387\") " pod="openshift-marketplace/redhat-operators-gwxbl"
Jan 30 09:37:27 crc kubenswrapper[4870]: I0130 09:37:27.155505 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gwxbl"
Jan 30 09:37:27 crc kubenswrapper[4870]: W0130 09:37:27.785437 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6135191_d11b_46b6_9eaf_08a0ffb73387.slice/crio-431c4158a94ca496e82d431c484f0b8025d2bfa0356402d185b46c4ba06c8cf3 WatchSource:0}: Error finding container 431c4158a94ca496e82d431c484f0b8025d2bfa0356402d185b46c4ba06c8cf3: Status 404 returned error can't find the container with id 431c4158a94ca496e82d431c484f0b8025d2bfa0356402d185b46c4ba06c8cf3
Jan 30 09:37:27 crc kubenswrapper[4870]: I0130 09:37:27.787997 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gwxbl"]
Jan 30 09:37:28 crc kubenswrapper[4870]: I0130 09:37:28.644017 4870 generic.go:334] "Generic (PLEG): container finished" podID="b6135191-d11b-46b6-9eaf-08a0ffb73387" containerID="a5c124c31129055a8afcb9a374ab7820a1cd94d86d3fd14a6e54ef3129cc493b" exitCode=0
Jan 30 09:37:28 crc kubenswrapper[4870]: I0130 09:37:28.644057 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gwxbl" event={"ID":"b6135191-d11b-46b6-9eaf-08a0ffb73387","Type":"ContainerDied","Data":"a5c124c31129055a8afcb9a374ab7820a1cd94d86d3fd14a6e54ef3129cc493b"}
Jan 30 09:37:28 crc kubenswrapper[4870]: I0130 09:37:28.644346 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gwxbl" event={"ID":"b6135191-d11b-46b6-9eaf-08a0ffb73387","Type":"ContainerStarted","Data":"431c4158a94ca496e82d431c484f0b8025d2bfa0356402d185b46c4ba06c8cf3"}
Jan 30 09:37:29 crc kubenswrapper[4870]: I0130 09:37:29.657294 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gwxbl" event={"ID":"b6135191-d11b-46b6-9eaf-08a0ffb73387","Type":"ContainerStarted","Data":"c07b12bf71463e8c5f4fbf6d8677bba531fc8125d7076b8e0b538798477c2107"}
Jan 30 09:37:31 crc kubenswrapper[4870]: I0130 09:37:31.074975 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40"
Jan 30 09:37:31 crc kubenswrapper[4870]: E0130 09:37:31.075671 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 09:37:37 crc kubenswrapper[4870]: I0130 09:37:37.473979 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-vzzk7_6f6b1608-a0e3-4b68-ae9b-f8cfeed9cd72/control-plane-machine-set-operator/0.log"
Jan 30 09:37:37 crc kubenswrapper[4870]: I0130 09:37:37.474312 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jr94b_042ed63b-a1a9-4072-ae87-71b9fb98280c/kube-rbac-proxy/0.log"
Jan 30 09:37:37 crc kubenswrapper[4870]: I0130 09:37:37.569864 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jr94b_042ed63b-a1a9-4072-ae87-71b9fb98280c/machine-api-operator/0.log"
Jan 30 09:37:39 crc kubenswrapper[4870]: I0130 09:37:39.775040 4870 generic.go:334] "Generic (PLEG): container finished" podID="b6135191-d11b-46b6-9eaf-08a0ffb73387" containerID="c07b12bf71463e8c5f4fbf6d8677bba531fc8125d7076b8e0b538798477c2107" exitCode=0
Jan 30 09:37:39 crc kubenswrapper[4870]: I0130 09:37:39.775129 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gwxbl" event={"ID":"b6135191-d11b-46b6-9eaf-08a0ffb73387","Type":"ContainerDied","Data":"c07b12bf71463e8c5f4fbf6d8677bba531fc8125d7076b8e0b538798477c2107"}
Jan 30 09:37:40 crc kubenswrapper[4870]: I0130 09:37:40.790692 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gwxbl" event={"ID":"b6135191-d11b-46b6-9eaf-08a0ffb73387","Type":"ContainerStarted","Data":"6339dc6602b8301690d5998649fb74d13d5b28888836b982f9059db82a0eeffb"}
Jan 30 09:37:40 crc kubenswrapper[4870]: I0130 09:37:40.828283 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gwxbl" podStartSLOduration=3.252498142 podStartE2EDuration="14.828251423s" podCreationTimestamp="2026-01-30 09:37:26 +0000 UTC" firstStartedPulling="2026-01-30 09:37:28.646384803 +0000 UTC m=+5287.341931922" lastFinishedPulling="2026-01-30 09:37:40.222138084 +0000 UTC m=+5298.917685203" observedRunningTime="2026-01-30 09:37:40.8147094 +0000 UTC m=+5299.510256499" watchObservedRunningTime="2026-01-30 09:37:40.828251423 +0000 UTC m=+5299.523798532"
Jan 30 09:37:45 crc kubenswrapper[4870]: I0130 09:37:45.076067 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40"
Jan 30 09:37:45 crc kubenswrapper[4870]: E0130 09:37:45.077425 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"
Jan 30 09:37:47 crc kubenswrapper[4870]: I0130 09:37:47.156402 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gwxbl"
Jan 30 09:37:47 crc kubenswrapper[4870]: I0130 09:37:47.158074 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gwxbl"
Jan 30 09:37:47 crc kubenswrapper[4870]: I0130 09:37:47.216230 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gwxbl"
Jan 30 09:37:47 crc kubenswrapper[4870]: I0130 09:37:47.922192 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gwxbl"
Jan 30 09:37:47 crc kubenswrapper[4870]: I0130 09:37:47.990133 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gwxbl"]
Jan 30 09:37:49 crc kubenswrapper[4870]: I0130 09:37:49.883132 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gwxbl" podUID="b6135191-d11b-46b6-9eaf-08a0ffb73387" containerName="registry-server" containerID="cri-o://6339dc6602b8301690d5998649fb74d13d5b28888836b982f9059db82a0eeffb" gracePeriod=2
Jan 30 09:37:50 crc kubenswrapper[4870]: I0130 09:37:50.384627 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gwxbl"
Jan 30 09:37:50 crc kubenswrapper[4870]: I0130 09:37:50.561397 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6135191-d11b-46b6-9eaf-08a0ffb73387-utilities\") pod \"b6135191-d11b-46b6-9eaf-08a0ffb73387\" (UID: \"b6135191-d11b-46b6-9eaf-08a0ffb73387\") "
Jan 30 09:37:50 crc kubenswrapper[4870]: I0130 09:37:50.561690 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv4ws\" (UniqueName: \"kubernetes.io/projected/b6135191-d11b-46b6-9eaf-08a0ffb73387-kube-api-access-xv4ws\") pod \"b6135191-d11b-46b6-9eaf-08a0ffb73387\" (UID: \"b6135191-d11b-46b6-9eaf-08a0ffb73387\") "
Jan 30 09:37:50 crc kubenswrapper[4870]: I0130 09:37:50.561834 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6135191-d11b-46b6-9eaf-08a0ffb73387-catalog-content\") pod \"b6135191-d11b-46b6-9eaf-08a0ffb73387\" (UID: \"b6135191-d11b-46b6-9eaf-08a0ffb73387\") "
Jan 30 09:37:50 crc kubenswrapper[4870]: I0130 09:37:50.562371 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6135191-d11b-46b6-9eaf-08a0ffb73387-utilities" (OuterVolumeSpecName: "utilities") pod "b6135191-d11b-46b6-9eaf-08a0ffb73387" (UID: "b6135191-d11b-46b6-9eaf-08a0ffb73387"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 09:37:50 crc kubenswrapper[4870]: I0130 09:37:50.573420 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6135191-d11b-46b6-9eaf-08a0ffb73387-kube-api-access-xv4ws" (OuterVolumeSpecName: "kube-api-access-xv4ws") pod "b6135191-d11b-46b6-9eaf-08a0ffb73387" (UID: "b6135191-d11b-46b6-9eaf-08a0ffb73387"). InnerVolumeSpecName "kube-api-access-xv4ws". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 09:37:50 crc kubenswrapper[4870]: I0130 09:37:50.664185 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6135191-d11b-46b6-9eaf-08a0ffb73387-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 09:37:50 crc kubenswrapper[4870]: I0130 09:37:50.664233 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv4ws\" (UniqueName: \"kubernetes.io/projected/b6135191-d11b-46b6-9eaf-08a0ffb73387-kube-api-access-xv4ws\") on node \"crc\" DevicePath \"\""
Jan 30 09:37:50 crc kubenswrapper[4870]: I0130 09:37:50.702950 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6135191-d11b-46b6-9eaf-08a0ffb73387-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6135191-d11b-46b6-9eaf-08a0ffb73387" (UID: "b6135191-d11b-46b6-9eaf-08a0ffb73387"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 09:37:50 crc kubenswrapper[4870]: I0130 09:37:50.765736 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6135191-d11b-46b6-9eaf-08a0ffb73387-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 09:37:50 crc kubenswrapper[4870]: I0130 09:37:50.894271 4870 generic.go:334] "Generic (PLEG): container finished" podID="b6135191-d11b-46b6-9eaf-08a0ffb73387" containerID="6339dc6602b8301690d5998649fb74d13d5b28888836b982f9059db82a0eeffb" exitCode=0
Jan 30 09:37:50 crc kubenswrapper[4870]: I0130 09:37:50.894321 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gwxbl" event={"ID":"b6135191-d11b-46b6-9eaf-08a0ffb73387","Type":"ContainerDied","Data":"6339dc6602b8301690d5998649fb74d13d5b28888836b982f9059db82a0eeffb"}
Jan 30 09:37:50 crc kubenswrapper[4870]: I0130 09:37:50.894337 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gwxbl"
Jan 30 09:37:50 crc kubenswrapper[4870]: I0130 09:37:50.894353 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gwxbl" event={"ID":"b6135191-d11b-46b6-9eaf-08a0ffb73387","Type":"ContainerDied","Data":"431c4158a94ca496e82d431c484f0b8025d2bfa0356402d185b46c4ba06c8cf3"}
Jan 30 09:37:50 crc kubenswrapper[4870]: I0130 09:37:50.894376 4870 scope.go:117] "RemoveContainer" containerID="6339dc6602b8301690d5998649fb74d13d5b28888836b982f9059db82a0eeffb"
Jan 30 09:37:50 crc kubenswrapper[4870]: I0130 09:37:50.916837 4870 scope.go:117] "RemoveContainer" containerID="c07b12bf71463e8c5f4fbf6d8677bba531fc8125d7076b8e0b538798477c2107"
Jan 30 09:37:50 crc kubenswrapper[4870]: I0130 09:37:50.968084 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gwxbl"]
Jan 30 09:37:50 crc kubenswrapper[4870]: I0130 09:37:50.970112 4870 scope.go:117] "RemoveContainer" containerID="a5c124c31129055a8afcb9a374ab7820a1cd94d86d3fd14a6e54ef3129cc493b"
Jan 30 09:37:50 crc kubenswrapper[4870]: I0130 09:37:50.973690 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gwxbl"]
Jan 30 09:37:50 crc kubenswrapper[4870]: I0130 09:37:50.999187 4870 scope.go:117] "RemoveContainer" containerID="6339dc6602b8301690d5998649fb74d13d5b28888836b982f9059db82a0eeffb"
Jan 30 09:37:50 crc kubenswrapper[4870]: E0130 09:37:50.999745 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6339dc6602b8301690d5998649fb74d13d5b28888836b982f9059db82a0eeffb\": container with ID starting with 6339dc6602b8301690d5998649fb74d13d5b28888836b982f9059db82a0eeffb not found: ID does not exist" containerID="6339dc6602b8301690d5998649fb74d13d5b28888836b982f9059db82a0eeffb"
Jan 30 09:37:50 crc kubenswrapper[4870]: I0130 09:37:50.999791 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6339dc6602b8301690d5998649fb74d13d5b28888836b982f9059db82a0eeffb"} err="failed to get container status \"6339dc6602b8301690d5998649fb74d13d5b28888836b982f9059db82a0eeffb\": rpc error: code = NotFound desc = could not find container \"6339dc6602b8301690d5998649fb74d13d5b28888836b982f9059db82a0eeffb\": container with ID starting with 6339dc6602b8301690d5998649fb74d13d5b28888836b982f9059db82a0eeffb not found: ID does not exist"
Jan 30 09:37:50 crc kubenswrapper[4870]: I0130 09:37:50.999835 4870 scope.go:117] "RemoveContainer" containerID="c07b12bf71463e8c5f4fbf6d8677bba531fc8125d7076b8e0b538798477c2107"
Jan 30 09:37:51 crc kubenswrapper[4870]: E0130 09:37:51.000447 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c07b12bf71463e8c5f4fbf6d8677bba531fc8125d7076b8e0b538798477c2107\": container with ID starting with c07b12bf71463e8c5f4fbf6d8677bba531fc8125d7076b8e0b538798477c2107 not found: ID does not exist" containerID="c07b12bf71463e8c5f4fbf6d8677bba531fc8125d7076b8e0b538798477c2107"
Jan 30 09:37:51 crc kubenswrapper[4870]: I0130 09:37:51.000502 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c07b12bf71463e8c5f4fbf6d8677bba531fc8125d7076b8e0b538798477c2107"} err="failed to get container status \"c07b12bf71463e8c5f4fbf6d8677bba531fc8125d7076b8e0b538798477c2107\": rpc error: code = NotFound desc = could not find container \"c07b12bf71463e8c5f4fbf6d8677bba531fc8125d7076b8e0b538798477c2107\": container with ID starting with c07b12bf71463e8c5f4fbf6d8677bba531fc8125d7076b8e0b538798477c2107 not found: ID does not exist"
Jan 30 09:37:51 crc kubenswrapper[4870]: I0130 09:37:51.000539 4870 scope.go:117] "RemoveContainer" containerID="a5c124c31129055a8afcb9a374ab7820a1cd94d86d3fd14a6e54ef3129cc493b"
Jan 30 09:37:51 crc kubenswrapper[4870]: E0130 09:37:51.001003 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5c124c31129055a8afcb9a374ab7820a1cd94d86d3fd14a6e54ef3129cc493b\": container with ID starting with a5c124c31129055a8afcb9a374ab7820a1cd94d86d3fd14a6e54ef3129cc493b not found: ID does not exist" containerID="a5c124c31129055a8afcb9a374ab7820a1cd94d86d3fd14a6e54ef3129cc493b"
Jan 30 09:37:51 crc kubenswrapper[4870]: I0130 09:37:51.001090 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5c124c31129055a8afcb9a374ab7820a1cd94d86d3fd14a6e54ef3129cc493b"} err="failed to get container status \"a5c124c31129055a8afcb9a374ab7820a1cd94d86d3fd14a6e54ef3129cc493b\": rpc error: code = NotFound desc = could not find container \"a5c124c31129055a8afcb9a374ab7820a1cd94d86d3fd14a6e54ef3129cc493b\": container with ID starting with a5c124c31129055a8afcb9a374ab7820a1cd94d86d3fd14a6e54ef3129cc493b not found: ID does not exist"
Jan 30 09:37:51 crc kubenswrapper[4870]: I0130 09:37:51.082725 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-84sjp"]
Jan 30 09:37:51 crc kubenswrapper[4870]: E0130 09:37:51.083304 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6135191-d11b-46b6-9eaf-08a0ffb73387" containerName="extract-content"
Jan 30 09:37:51 crc kubenswrapper[4870]: I0130 09:37:51.083330 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6135191-d11b-46b6-9eaf-08a0ffb73387" containerName="extract-content"
Jan 30 09:37:51 crc kubenswrapper[4870]: E0130 09:37:51.083356 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6135191-d11b-46b6-9eaf-08a0ffb73387" containerName="extract-utilities"
Jan 30 09:37:51 crc kubenswrapper[4870]: I0130 09:37:51.083365 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6135191-d11b-46b6-9eaf-08a0ffb73387" containerName="extract-utilities"
Jan 30 09:37:51 crc kubenswrapper[4870]: E0130 09:37:51.083410 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6135191-d11b-46b6-9eaf-08a0ffb73387" containerName="registry-server"
Jan 30 09:37:51 crc kubenswrapper[4870]: I0130 09:37:51.083427 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6135191-d11b-46b6-9eaf-08a0ffb73387" containerName="registry-server"
Jan 30 09:37:51 crc kubenswrapper[4870]: I0130 09:37:51.083668 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6135191-d11b-46b6-9eaf-08a0ffb73387" containerName="registry-server"
Jan 30 09:37:51 crc kubenswrapper[4870]: I0130 09:37:51.085565 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-84sjp"
Jan 30 09:37:51 crc kubenswrapper[4870]: I0130 09:37:51.116159 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-84sjp"]
Jan 30 09:37:51 crc kubenswrapper[4870]: I0130 09:37:51.275904 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f79kd\" (UniqueName: \"kubernetes.io/projected/9048c280-ecbd-4fcb-ac6f-4b095c6e3748-kube-api-access-f79kd\") pod \"certified-operators-84sjp\" (UID: \"9048c280-ecbd-4fcb-ac6f-4b095c6e3748\") " pod="openshift-marketplace/certified-operators-84sjp"
Jan 30 09:37:51 crc kubenswrapper[4870]: I0130 09:37:51.276292 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9048c280-ecbd-4fcb-ac6f-4b095c6e3748-utilities\") pod \"certified-operators-84sjp\" (UID: \"9048c280-ecbd-4fcb-ac6f-4b095c6e3748\") " pod="openshift-marketplace/certified-operators-84sjp"
Jan 30 09:37:51 crc kubenswrapper[4870]: I0130 09:37:51.276430 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9048c280-ecbd-4fcb-ac6f-4b095c6e3748-catalog-content\") pod \"certified-operators-84sjp\" (UID: \"9048c280-ecbd-4fcb-ac6f-4b095c6e3748\") " pod="openshift-marketplace/certified-operators-84sjp"
Jan 30 09:37:51 crc kubenswrapper[4870]: I0130 09:37:51.379215 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f79kd\" (UniqueName: \"kubernetes.io/projected/9048c280-ecbd-4fcb-ac6f-4b095c6e3748-kube-api-access-f79kd\") pod \"certified-operators-84sjp\" (UID: \"9048c280-ecbd-4fcb-ac6f-4b095c6e3748\") " pod="openshift-marketplace/certified-operators-84sjp"
Jan 30 09:37:51 crc kubenswrapper[4870]: I0130 09:37:51.379334 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9048c280-ecbd-4fcb-ac6f-4b095c6e3748-utilities\") pod \"certified-operators-84sjp\" (UID: \"9048c280-ecbd-4fcb-ac6f-4b095c6e3748\") " pod="openshift-marketplace/certified-operators-84sjp"
Jan 30 09:37:51 crc kubenswrapper[4870]: I0130 09:37:51.379369 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9048c280-ecbd-4fcb-ac6f-4b095c6e3748-catalog-content\") pod \"certified-operators-84sjp\" (UID: \"9048c280-ecbd-4fcb-ac6f-4b095c6e3748\") " pod="openshift-marketplace/certified-operators-84sjp"
Jan 30 09:37:51 crc kubenswrapper[4870]: I0130 09:37:51.379908 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9048c280-ecbd-4fcb-ac6f-4b095c6e3748-catalog-content\") pod \"certified-operators-84sjp\" (UID: \"9048c280-ecbd-4fcb-ac6f-4b095c6e3748\") " pod="openshift-marketplace/certified-operators-84sjp"
Jan 30 09:37:51 crc kubenswrapper[4870]: I0130 09:37:51.380277 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9048c280-ecbd-4fcb-ac6f-4b095c6e3748-utilities\") pod \"certified-operators-84sjp\" (UID: \"9048c280-ecbd-4fcb-ac6f-4b095c6e3748\") " pod="openshift-marketplace/certified-operators-84sjp"
Jan 30 09:37:51 crc kubenswrapper[4870]: I0130 09:37:51.401908 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f79kd\" (UniqueName: \"kubernetes.io/projected/9048c280-ecbd-4fcb-ac6f-4b095c6e3748-kube-api-access-f79kd\") pod \"certified-operators-84sjp\" (UID: \"9048c280-ecbd-4fcb-ac6f-4b095c6e3748\") " pod="openshift-marketplace/certified-operators-84sjp"
Jan 30 09:37:51 crc kubenswrapper[4870]: I0130 09:37:51.440459 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-84sjp"
Jan 30 09:37:52 crc kubenswrapper[4870]: I0130 09:37:52.049003 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-84sjp"]
Jan 30 09:37:52 crc kubenswrapper[4870]: I0130 09:37:52.089467 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6135191-d11b-46b6-9eaf-08a0ffb73387" path="/var/lib/kubelet/pods/b6135191-d11b-46b6-9eaf-08a0ffb73387/volumes"
Jan 30 09:37:52 crc kubenswrapper[4870]: I0130 09:37:52.755006 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-ltz5g_dfee5a53-cd5a-470f-9327-e614ff6e56b3/cert-manager-controller/0.log"
Jan 30 09:37:52 crc kubenswrapper[4870]: I0130 09:37:52.857370 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-2hzbl_4e91c0f0-40df-495c-8758-892355565838/cert-manager-cainjector/0.log"
Jan 30 09:37:52 crc kubenswrapper[4870]: I0130 09:37:52.914557 4870 generic.go:334] "Generic (PLEG): container finished" podID="9048c280-ecbd-4fcb-ac6f-4b095c6e3748" containerID="3054befa2dad8228757064042af15055c5b09f7a8beefee27a231b2ae82c7a1e" exitCode=0
Jan 30 09:37:52 crc kubenswrapper[4870]: I0130 09:37:52.914621 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84sjp" event={"ID":"9048c280-ecbd-4fcb-ac6f-4b095c6e3748","Type":"ContainerDied","Data":"3054befa2dad8228757064042af15055c5b09f7a8beefee27a231b2ae82c7a1e"}
Jan 30 09:37:52 crc kubenswrapper[4870]: I0130 09:37:52.914651 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84sjp" event={"ID":"9048c280-ecbd-4fcb-ac6f-4b095c6e3748","Type":"ContainerStarted","Data":"317d6101c3669f63b3e2482610f5f0422019c2ced984517b4c9c94f58e751f84"}
Jan 30 09:37:52 crc kubenswrapper[4870]: I0130 09:37:52.976335 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-n5xzk_c3c8ba60-0b0f-4f22-9e7e-99b0dbc05ec1/cert-manager-webhook/0.log"
Jan 30 09:37:54 crc kubenswrapper[4870]: I0130 09:37:54.936059 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84sjp" event={"ID":"9048c280-ecbd-4fcb-ac6f-4b095c6e3748","Type":"ContainerStarted","Data":"9c563a1d2b675ea56ec79a4523094910d9938a668b434ecd3d19af4111aaa9f1"}
Jan 30 09:37:55 crc kubenswrapper[4870]: I0130 09:37:55.975032 4870
generic.go:334] "Generic (PLEG): container finished" podID="9048c280-ecbd-4fcb-ac6f-4b095c6e3748" containerID="9c563a1d2b675ea56ec79a4523094910d9938a668b434ecd3d19af4111aaa9f1" exitCode=0 Jan 30 09:37:55 crc kubenswrapper[4870]: I0130 09:37:55.975507 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84sjp" event={"ID":"9048c280-ecbd-4fcb-ac6f-4b095c6e3748","Type":"ContainerDied","Data":"9c563a1d2b675ea56ec79a4523094910d9938a668b434ecd3d19af4111aaa9f1"} Jan 30 09:37:56 crc kubenswrapper[4870]: I0130 09:37:56.074869 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40" Jan 30 09:37:56 crc kubenswrapper[4870]: I0130 09:37:56.987439 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerStarted","Data":"47da0905bfb9191daf20cd74642a93fa3691ef9a29f1507f03d471be1698c277"} Jan 30 09:37:56 crc kubenswrapper[4870]: I0130 09:37:56.990101 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84sjp" event={"ID":"9048c280-ecbd-4fcb-ac6f-4b095c6e3748","Type":"ContainerStarted","Data":"20b69832fce8a3445fd0e656e0d2e965ca971155faff5aefe0571a8fb6d9d7a0"} Jan 30 09:38:01 crc kubenswrapper[4870]: I0130 09:38:01.440698 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-84sjp" Jan 30 09:38:01 crc kubenswrapper[4870]: I0130 09:38:01.441353 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-84sjp" Jan 30 09:38:01 crc kubenswrapper[4870]: I0130 09:38:01.489831 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-84sjp" Jan 30 09:38:01 crc kubenswrapper[4870]: I0130 09:38:01.516295 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-84sjp" podStartSLOduration=6.636568659 podStartE2EDuration="10.516273114s" podCreationTimestamp="2026-01-30 09:37:51 +0000 UTC" firstStartedPulling="2026-01-30 09:37:52.916541973 +0000 UTC m=+5311.612089072" lastFinishedPulling="2026-01-30 09:37:56.796246408 +0000 UTC m=+5315.491793527" observedRunningTime="2026-01-30 09:37:57.028435504 +0000 UTC m=+5315.723982623" watchObservedRunningTime="2026-01-30 09:38:01.516273114 +0000 UTC m=+5320.211820223" Jan 30 09:38:02 crc kubenswrapper[4870]: I0130 09:38:02.095329 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-84sjp" Jan 30 09:38:02 crc kubenswrapper[4870]: I0130 09:38:02.167664 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-84sjp"] Jan 30 09:38:04 crc kubenswrapper[4870]: I0130 09:38:04.053341 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-84sjp" podUID="9048c280-ecbd-4fcb-ac6f-4b095c6e3748" containerName="registry-server" containerID="cri-o://20b69832fce8a3445fd0e656e0d2e965ca971155faff5aefe0571a8fb6d9d7a0" gracePeriod=2 Jan 30 09:38:04 crc kubenswrapper[4870]: I0130 09:38:04.598311 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-84sjp" Jan 30 09:38:04 crc kubenswrapper[4870]: I0130 09:38:04.698751 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f79kd\" (UniqueName: \"kubernetes.io/projected/9048c280-ecbd-4fcb-ac6f-4b095c6e3748-kube-api-access-f79kd\") pod \"9048c280-ecbd-4fcb-ac6f-4b095c6e3748\" (UID: \"9048c280-ecbd-4fcb-ac6f-4b095c6e3748\") " Jan 30 09:38:04 crc kubenswrapper[4870]: I0130 09:38:04.698919 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9048c280-ecbd-4fcb-ac6f-4b095c6e3748-utilities\") pod \"9048c280-ecbd-4fcb-ac6f-4b095c6e3748\" (UID: \"9048c280-ecbd-4fcb-ac6f-4b095c6e3748\") " Jan 30 09:38:04 crc kubenswrapper[4870]: I0130 09:38:04.699035 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9048c280-ecbd-4fcb-ac6f-4b095c6e3748-catalog-content\") pod \"9048c280-ecbd-4fcb-ac6f-4b095c6e3748\" (UID: \"9048c280-ecbd-4fcb-ac6f-4b095c6e3748\") " Jan 30 09:38:04 crc kubenswrapper[4870]: I0130 09:38:04.701180 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9048c280-ecbd-4fcb-ac6f-4b095c6e3748-utilities" (OuterVolumeSpecName: "utilities") pod "9048c280-ecbd-4fcb-ac6f-4b095c6e3748" (UID: "9048c280-ecbd-4fcb-ac6f-4b095c6e3748"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:38:04 crc kubenswrapper[4870]: I0130 09:38:04.707028 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9048c280-ecbd-4fcb-ac6f-4b095c6e3748-kube-api-access-f79kd" (OuterVolumeSpecName: "kube-api-access-f79kd") pod "9048c280-ecbd-4fcb-ac6f-4b095c6e3748" (UID: "9048c280-ecbd-4fcb-ac6f-4b095c6e3748"). InnerVolumeSpecName "kube-api-access-f79kd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:38:04 crc kubenswrapper[4870]: I0130 09:38:04.768769 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9048c280-ecbd-4fcb-ac6f-4b095c6e3748-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9048c280-ecbd-4fcb-ac6f-4b095c6e3748" (UID: "9048c280-ecbd-4fcb-ac6f-4b095c6e3748"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:38:04 crc kubenswrapper[4870]: I0130 09:38:04.802633 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f79kd\" (UniqueName: \"kubernetes.io/projected/9048c280-ecbd-4fcb-ac6f-4b095c6e3748-kube-api-access-f79kd\") on node \"crc\" DevicePath \"\"" Jan 30 09:38:04 crc kubenswrapper[4870]: I0130 09:38:04.802683 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9048c280-ecbd-4fcb-ac6f-4b095c6e3748-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:38:04 crc kubenswrapper[4870]: I0130 09:38:04.802699 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9048c280-ecbd-4fcb-ac6f-4b095c6e3748-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:38:05 crc kubenswrapper[4870]: I0130 09:38:05.066848 4870 generic.go:334] "Generic (PLEG): container finished" podID="9048c280-ecbd-4fcb-ac6f-4b095c6e3748" containerID="20b69832fce8a3445fd0e656e0d2e965ca971155faff5aefe0571a8fb6d9d7a0" exitCode=0 Jan 30 09:38:05 crc kubenswrapper[4870]: I0130 09:38:05.066922 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84sjp" event={"ID":"9048c280-ecbd-4fcb-ac6f-4b095c6e3748","Type":"ContainerDied","Data":"20b69832fce8a3445fd0e656e0d2e965ca971155faff5aefe0571a8fb6d9d7a0"} Jan 30 09:38:05 crc kubenswrapper[4870]: I0130 09:38:05.067173 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84sjp" event={"ID":"9048c280-ecbd-4fcb-ac6f-4b095c6e3748","Type":"ContainerDied","Data":"317d6101c3669f63b3e2482610f5f0422019c2ced984517b4c9c94f58e751f84"} Jan 30 09:38:05 crc kubenswrapper[4870]: I0130 09:38:05.067194 4870 scope.go:117] "RemoveContainer" containerID="20b69832fce8a3445fd0e656e0d2e965ca971155faff5aefe0571a8fb6d9d7a0" Jan 30 09:38:05 crc kubenswrapper[4870]: I0130 09:38:05.066990 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-84sjp" Jan 30 09:38:05 crc kubenswrapper[4870]: I0130 09:38:05.100795 4870 scope.go:117] "RemoveContainer" containerID="9c563a1d2b675ea56ec79a4523094910d9938a668b434ecd3d19af4111aaa9f1" Jan 30 09:38:05 crc kubenswrapper[4870]: I0130 09:38:05.117093 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-84sjp"] Jan 30 09:38:05 crc kubenswrapper[4870]: I0130 09:38:05.128339 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-84sjp"] Jan 30 09:38:05 crc kubenswrapper[4870]: I0130 09:38:05.611544 4870 scope.go:117] "RemoveContainer" containerID="3054befa2dad8228757064042af15055c5b09f7a8beefee27a231b2ae82c7a1e" Jan 30 09:38:05 crc kubenswrapper[4870]: I0130 09:38:05.661386 4870 scope.go:117] "RemoveContainer" containerID="20b69832fce8a3445fd0e656e0d2e965ca971155faff5aefe0571a8fb6d9d7a0" Jan 30 09:38:05 crc kubenswrapper[4870]: E0130 09:38:05.662149 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20b69832fce8a3445fd0e656e0d2e965ca971155faff5aefe0571a8fb6d9d7a0\": container with ID starting with 20b69832fce8a3445fd0e656e0d2e965ca971155faff5aefe0571a8fb6d9d7a0 not found: ID does not exist" containerID="20b69832fce8a3445fd0e656e0d2e965ca971155faff5aefe0571a8fb6d9d7a0" Jan 30 09:38:05 crc kubenswrapper[4870]: I0130 09:38:05.662177 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20b69832fce8a3445fd0e656e0d2e965ca971155faff5aefe0571a8fb6d9d7a0"} err="failed to get container status \"20b69832fce8a3445fd0e656e0d2e965ca971155faff5aefe0571a8fb6d9d7a0\": rpc error: code = NotFound desc = could not find container \"20b69832fce8a3445fd0e656e0d2e965ca971155faff5aefe0571a8fb6d9d7a0\": container with ID starting with 20b69832fce8a3445fd0e656e0d2e965ca971155faff5aefe0571a8fb6d9d7a0 not found: ID does not exist" Jan 30 09:38:05 crc kubenswrapper[4870]: I0130 09:38:05.662197 4870 scope.go:117] "RemoveContainer" containerID="9c563a1d2b675ea56ec79a4523094910d9938a668b434ecd3d19af4111aaa9f1" Jan 30 09:38:05 crc kubenswrapper[4870]: E0130 09:38:05.662568 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c563a1d2b675ea56ec79a4523094910d9938a668b434ecd3d19af4111aaa9f1\": container with ID starting with 9c563a1d2b675ea56ec79a4523094910d9938a668b434ecd3d19af4111aaa9f1 not found: ID does not exist" containerID="9c563a1d2b675ea56ec79a4523094910d9938a668b434ecd3d19af4111aaa9f1" Jan 30 09:38:05 crc kubenswrapper[4870]: I0130 09:38:05.662584 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c563a1d2b675ea56ec79a4523094910d9938a668b434ecd3d19af4111aaa9f1"} err="failed to get container status \"9c563a1d2b675ea56ec79a4523094910d9938a668b434ecd3d19af4111aaa9f1\": rpc error: code = NotFound desc = could not find container \"9c563a1d2b675ea56ec79a4523094910d9938a668b434ecd3d19af4111aaa9f1\": container with ID starting with 9c563a1d2b675ea56ec79a4523094910d9938a668b434ecd3d19af4111aaa9f1 not found: ID does not exist" Jan 30 09:38:05 crc kubenswrapper[4870]: I0130 09:38:05.662597 4870 scope.go:117] "RemoveContainer" containerID="3054befa2dad8228757064042af15055c5b09f7a8beefee27a231b2ae82c7a1e" Jan 30 09:38:05 crc kubenswrapper[4870]: E0130 09:38:05.662884 4870 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3054befa2dad8228757064042af15055c5b09f7a8beefee27a231b2ae82c7a1e\": container with ID starting with 3054befa2dad8228757064042af15055c5b09f7a8beefee27a231b2ae82c7a1e not found: ID does not exist" containerID="3054befa2dad8228757064042af15055c5b09f7a8beefee27a231b2ae82c7a1e" Jan 30 09:38:05 crc kubenswrapper[4870]: I0130 09:38:05.662981 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3054befa2dad8228757064042af15055c5b09f7a8beefee27a231b2ae82c7a1e"} err="failed to get container status \"3054befa2dad8228757064042af15055c5b09f7a8beefee27a231b2ae82c7a1e\": rpc error: code = NotFound desc = could not find container \"3054befa2dad8228757064042af15055c5b09f7a8beefee27a231b2ae82c7a1e\": container with ID starting with 3054befa2dad8228757064042af15055c5b09f7a8beefee27a231b2ae82c7a1e not found: ID does not exist" Jan 30 09:38:06 crc kubenswrapper[4870]: I0130 09:38:06.086430 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9048c280-ecbd-4fcb-ac6f-4b095c6e3748" path="/var/lib/kubelet/pods/9048c280-ecbd-4fcb-ac6f-4b095c6e3748/volumes" Jan 30 09:38:07 crc kubenswrapper[4870]: I0130 09:38:07.073055 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-ql9j9_b7e9a284-8b5c-4ae7-b388-3e9f907082d2/nmstate-console-plugin/0.log" Jan 30 09:38:07 crc kubenswrapper[4870]: I0130 09:38:07.235681 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tnl9h_f38692e7-8fd1-48e1-ab3b-07cbac975021/nmstate-handler/0.log" Jan 30 09:38:07 crc kubenswrapper[4870]: I0130 09:38:07.439390 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-xdc74_86d16b9b-390e-442a-a74f-a9e32e92da59/nmstate-metrics/0.log" Jan 30 09:38:07 crc kubenswrapper[4870]: I0130 09:38:07.440140 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-xdc74_86d16b9b-390e-442a-a74f-a9e32e92da59/kube-rbac-proxy/0.log" Jan 30 09:38:08 crc kubenswrapper[4870]: I0130 09:38:08.004743 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-sf8qk_bdb3e88d-691c-478c-ab03-cc84b8e04ea6/nmstate-operator/0.log" Jan 30 09:38:08 crc kubenswrapper[4870]: I0130 09:38:08.074458 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-rsk45_06799197-023a-4ed3-a378-9a1fbf25fda2/nmstate-webhook/0.log" Jan 30 09:38:21 crc kubenswrapper[4870]: I0130 09:38:21.157711 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-hj2pn_614f63fc-ed66-41bb-b9fe-4229b3b67f50/prometheus-operator/0.log" Jan 30 09:38:21 crc kubenswrapper[4870]: I0130 09:38:21.280995 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-bf5558b74-8wxnf_1b8b459d-7a00-4e96-8916-4edd9fc87b99/prometheus-operator-admission-webhook/0.log" Jan 30 09:38:21 crc kubenswrapper[4870]: I0130 09:38:21.327975 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-bf5558b74-9clj8_586011b7-bc23-4a41-8795-bc28910cd170/prometheus-operator-admission-webhook/0.log" Jan 30 09:38:21 crc kubenswrapper[4870]: I0130 09:38:21.504134 4870 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-tmzq2_962cb597-f461-4983-b37a-a4c9e545f7d8/perses-operator/0.log" Jan 30 09:38:21 crc kubenswrapper[4870]: I0130 09:38:21.511945 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-lv4dk_0f7d84eb-b450-4168-b207-22520fed3fd3/operator/0.log" Jan 30 09:38:26 crc kubenswrapper[4870]: I0130 09:38:26.645432 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gvzmr"] Jan 30 09:38:26 crc kubenswrapper[4870]: E0130 09:38:26.646648 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9048c280-ecbd-4fcb-ac6f-4b095c6e3748" containerName="extract-content" Jan 30 09:38:26 crc kubenswrapper[4870]: I0130 09:38:26.646667 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="9048c280-ecbd-4fcb-ac6f-4b095c6e3748" containerName="extract-content" Jan 30 09:38:26 crc kubenswrapper[4870]: E0130 09:38:26.646690 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9048c280-ecbd-4fcb-ac6f-4b095c6e3748" containerName="extract-utilities" Jan 30 09:38:26 crc kubenswrapper[4870]: I0130 09:38:26.646699 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="9048c280-ecbd-4fcb-ac6f-4b095c6e3748" containerName="extract-utilities" Jan 30 09:38:26 crc kubenswrapper[4870]: E0130 09:38:26.647129 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9048c280-ecbd-4fcb-ac6f-4b095c6e3748" containerName="registry-server" Jan 30 09:38:26 crc kubenswrapper[4870]: I0130 09:38:26.647143 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="9048c280-ecbd-4fcb-ac6f-4b095c6e3748" containerName="registry-server" Jan 30 09:38:26 crc kubenswrapper[4870]: I0130 09:38:26.647417 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="9048c280-ecbd-4fcb-ac6f-4b095c6e3748" containerName="registry-server" Jan 30 09:38:26 crc kubenswrapper[4870]: I0130 09:38:26.649298 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gvzmr" Jan 30 09:38:26 crc kubenswrapper[4870]: I0130 09:38:26.655263 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gvzmr"] Jan 30 09:38:26 crc kubenswrapper[4870]: I0130 09:38:26.768705 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fe021a5-6534-4aad-aabc-da82e18587d6-catalog-content\") pod \"community-operators-gvzmr\" (UID: \"2fe021a5-6534-4aad-aabc-da82e18587d6\") " pod="openshift-marketplace/community-operators-gvzmr" Jan 30 09:38:26 crc kubenswrapper[4870]: I0130 09:38:26.768813 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzkvq\" (UniqueName: \"kubernetes.io/projected/2fe021a5-6534-4aad-aabc-da82e18587d6-kube-api-access-hzkvq\") pod \"community-operators-gvzmr\" (UID: \"2fe021a5-6534-4aad-aabc-da82e18587d6\") " pod="openshift-marketplace/community-operators-gvzmr" Jan 30 09:38:26 crc kubenswrapper[4870]: I0130 09:38:26.768832 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fe021a5-6534-4aad-aabc-da82e18587d6-utilities\") pod \"community-operators-gvzmr\" (UID: \"2fe021a5-6534-4aad-aabc-da82e18587d6\") " pod="openshift-marketplace/community-operators-gvzmr" Jan 30 09:38:26 crc kubenswrapper[4870]: I0130 09:38:26.871990 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fe021a5-6534-4aad-aabc-da82e18587d6-catalog-content\") pod \"community-operators-gvzmr\" (UID: \"2fe021a5-6534-4aad-aabc-da82e18587d6\") " pod="openshift-marketplace/community-operators-gvzmr" Jan 30 09:38:26 crc kubenswrapper[4870]: I0130 09:38:26.872174 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzkvq\" (UniqueName: \"kubernetes.io/projected/2fe021a5-6534-4aad-aabc-da82e18587d6-kube-api-access-hzkvq\") pod \"community-operators-gvzmr\" (UID: \"2fe021a5-6534-4aad-aabc-da82e18587d6\") " pod="openshift-marketplace/community-operators-gvzmr" Jan 30 09:38:26 crc kubenswrapper[4870]: I0130 09:38:26.872243 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fe021a5-6534-4aad-aabc-da82e18587d6-utilities\") pod \"community-operators-gvzmr\" (UID: \"2fe021a5-6534-4aad-aabc-da82e18587d6\") " pod="openshift-marketplace/community-operators-gvzmr" Jan 30 09:38:26 crc kubenswrapper[4870]: I0130 09:38:26.872719 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fe021a5-6534-4aad-aabc-da82e18587d6-catalog-content\") pod \"community-operators-gvzmr\" (UID: \"2fe021a5-6534-4aad-aabc-da82e18587d6\") " pod="openshift-marketplace/community-operators-gvzmr" Jan 30 09:38:26 crc kubenswrapper[4870]: I0130 09:38:26.873017 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fe021a5-6534-4aad-aabc-da82e18587d6-utilities\") pod \"community-operators-gvzmr\" (UID: \"2fe021a5-6534-4aad-aabc-da82e18587d6\") " pod="openshift-marketplace/community-operators-gvzmr" Jan 30 09:38:26 crc kubenswrapper[4870]: I0130 09:38:26.905391 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hzkvq\" (UniqueName: \"kubernetes.io/projected/2fe021a5-6534-4aad-aabc-da82e18587d6-kube-api-access-hzkvq\") pod \"community-operators-gvzmr\" (UID: \"2fe021a5-6534-4aad-aabc-da82e18587d6\") " pod="openshift-marketplace/community-operators-gvzmr" Jan 30 09:38:26 crc kubenswrapper[4870]: I0130 09:38:26.966935 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gvzmr" Jan 30 09:38:27 crc kubenswrapper[4870]: I0130 09:38:27.555431 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gvzmr"] Jan 30 09:38:28 crc kubenswrapper[4870]: I0130 09:38:28.289639 4870 generic.go:334] "Generic (PLEG): container finished" podID="2fe021a5-6534-4aad-aabc-da82e18587d6" containerID="d4c68d2b61af4455d80436ab2cd4e5bb21e57b71e94a4b798fd7f012135b08c6" exitCode=0 Jan 30 09:38:28 crc kubenswrapper[4870]: I0130 09:38:28.289752 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvzmr" event={"ID":"2fe021a5-6534-4aad-aabc-da82e18587d6","Type":"ContainerDied","Data":"d4c68d2b61af4455d80436ab2cd4e5bb21e57b71e94a4b798fd7f012135b08c6"} Jan 30 09:38:28 crc kubenswrapper[4870]: I0130 09:38:28.290733 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvzmr" event={"ID":"2fe021a5-6534-4aad-aabc-da82e18587d6","Type":"ContainerStarted","Data":"b44c001080442c74c291f4234f373f7dcd75b3d230046d851e0075a1d404593a"} Jan 30 09:38:28 crc kubenswrapper[4870]: I0130 09:38:28.291754 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:38:30 crc kubenswrapper[4870]: I0130 09:38:30.309520 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvzmr" event={"ID":"2fe021a5-6534-4aad-aabc-da82e18587d6","Type":"ContainerStarted","Data":"25bbe5473f8721e2d80baefa6abb9b0a74613b494e6b78293367482ab21a0a2a"} Jan 30 09:38:31 crc kubenswrapper[4870]: I0130 09:38:31.320577 4870 generic.go:334] "Generic (PLEG): container finished" podID="2fe021a5-6534-4aad-aabc-da82e18587d6" containerID="25bbe5473f8721e2d80baefa6abb9b0a74613b494e6b78293367482ab21a0a2a" exitCode=0 Jan 30 09:38:31 crc kubenswrapper[4870]: I0130 09:38:31.320745 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvzmr" event={"ID":"2fe021a5-6534-4aad-aabc-da82e18587d6","Type":"ContainerDied","Data":"25bbe5473f8721e2d80baefa6abb9b0a74613b494e6b78293367482ab21a0a2a"} Jan 30 09:38:32 crc kubenswrapper[4870]: I0130 09:38:32.333187 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvzmr" event={"ID":"2fe021a5-6534-4aad-aabc-da82e18587d6","Type":"ContainerStarted","Data":"7a92b1340513f77963339b6db5fd2c460124ab2a79c0e1a016074e513a325747"} Jan 30 09:38:32 crc kubenswrapper[4870]: I0130 09:38:32.356199 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gvzmr" podStartSLOduration=2.83506848 podStartE2EDuration="6.356178268s" podCreationTimestamp="2026-01-30 09:38:26 +0000 UTC" firstStartedPulling="2026-01-30 09:38:28.291485252 +0000 UTC m=+5346.987032361" lastFinishedPulling="2026-01-30 09:38:31.81259504 +0000 UTC m=+5350.508142149" observedRunningTime="2026-01-30 09:38:32.351161931 +0000 UTC m=+5351.046709040" watchObservedRunningTime="2026-01-30 
09:38:32.356178268 +0000 UTC m=+5351.051725377" Jan 30 09:38:36 crc kubenswrapper[4870]: I0130 09:38:36.795893 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-2dwrk_b8c43bdb-2bfa-445b-9526-a03eb3f3ca20/kube-rbac-proxy/0.log" Jan 30 09:38:36 crc kubenswrapper[4870]: I0130 09:38:36.867278 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-2dwrk_b8c43bdb-2bfa-445b-9526-a03eb3f3ca20/controller/0.log" Jan 30 09:38:36 crc kubenswrapper[4870]: I0130 09:38:36.967682 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gvzmr" Jan 30 09:38:36 crc kubenswrapper[4870]: I0130 09:38:36.967740 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gvzmr" Jan 30 09:38:37 crc kubenswrapper[4870]: I0130 09:38:37.004493 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-pnddd_5d3d6557-5b19-47c3-9e81-09b8dee3b239/frr-k8s-webhook-server/0.log" Jan 30 09:38:37 crc kubenswrapper[4870]: I0130 09:38:37.023508 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gvzmr" Jan 30 09:38:37 crc kubenswrapper[4870]: I0130 09:38:37.094774 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwhkv_008f589d-dab4-42af-9a42-cb6c00737f44/cp-frr-files/0.log" Jan 30 09:38:37 crc kubenswrapper[4870]: I0130 09:38:37.267909 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwhkv_008f589d-dab4-42af-9a42-cb6c00737f44/cp-reloader/0.log" Jan 30 09:38:37 crc kubenswrapper[4870]: I0130 09:38:37.274330 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwhkv_008f589d-dab4-42af-9a42-cb6c00737f44/cp-frr-files/0.log" Jan 30 09:38:37 crc kubenswrapper[4870]: I0130 09:38:37.307910 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwhkv_008f589d-dab4-42af-9a42-cb6c00737f44/cp-reloader/0.log" Jan 30 09:38:37 crc kubenswrapper[4870]: I0130 09:38:37.329666 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwhkv_008f589d-dab4-42af-9a42-cb6c00737f44/cp-metrics/0.log" Jan 30 09:38:37 crc kubenswrapper[4870]: I0130 09:38:37.437550 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gvzmr" Jan 30 09:38:37 crc kubenswrapper[4870]: I0130 09:38:37.492368 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gvzmr"] Jan 30 09:38:37 crc kubenswrapper[4870]: I0130 09:38:37.509095 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwhkv_008f589d-dab4-42af-9a42-cb6c00737f44/cp-reloader/0.log" Jan 30 09:38:37 crc kubenswrapper[4870]: I0130 09:38:37.553480 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwhkv_008f589d-dab4-42af-9a42-cb6c00737f44/cp-metrics/0.log" Jan 30 09:38:37 crc kubenswrapper[4870]: I0130 09:38:37.608978 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwhkv_008f589d-dab4-42af-9a42-cb6c00737f44/cp-frr-files/0.log" Jan 30 09:38:37 crc kubenswrapper[4870]: I0130 09:38:37.617213 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-zwhkv_008f589d-dab4-42af-9a42-cb6c00737f44/cp-metrics/0.log" Jan 30 09:38:37 crc kubenswrapper[4870]: I0130 09:38:37.806536 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwhkv_008f589d-dab4-42af-9a42-cb6c00737f44/cp-metrics/0.log" Jan 30 09:38:37 crc kubenswrapper[4870]: I0130 09:38:37.812232 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwhkv_008f589d-dab4-42af-9a42-cb6c00737f44/controller/0.log" Jan 30 09:38:37 crc kubenswrapper[4870]: I0130 09:38:37.836488 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwhkv_008f589d-dab4-42af-9a42-cb6c00737f44/cp-frr-files/0.log" Jan 30 09:38:37 crc kubenswrapper[4870]: I0130 09:38:37.839445 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwhkv_008f589d-dab4-42af-9a42-cb6c00737f44/cp-reloader/0.log" Jan 30 09:38:38 crc kubenswrapper[4870]: I0130 09:38:38.008095 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwhkv_008f589d-dab4-42af-9a42-cb6c00737f44/frr-metrics/0.log" Jan 30 09:38:38 crc kubenswrapper[4870]: I0130 09:38:38.038567 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwhkv_008f589d-dab4-42af-9a42-cb6c00737f44/kube-rbac-proxy/0.log" Jan 30 09:38:38 crc kubenswrapper[4870]: I0130 09:38:38.045227 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwhkv_008f589d-dab4-42af-9a42-cb6c00737f44/kube-rbac-proxy-frr/0.log" Jan 30 09:38:38 crc kubenswrapper[4870]: I0130 09:38:38.226723 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwhkv_008f589d-dab4-42af-9a42-cb6c00737f44/reloader/0.log" Jan 30 09:38:38 crc kubenswrapper[4870]: I0130 09:38:38.317806 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-567987c4fc-ff527_70a9e498-4f2a-40ff-8837-7811ffe26e2d/manager/0.log" Jan 30 09:38:38 crc kubenswrapper[4870]: I0130 09:38:38.567939 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-d5db5fbbd-k8pwt_f01bc9ba-9427-4c0a-927e-56b20aca72c5/webhook-server/0.log" Jan 30 09:38:38 crc kubenswrapper[4870]: I0130 09:38:38.803843 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7q5pn_84099c66-a13e-4949-ae36-7fa85a6a6a56/kube-rbac-proxy/0.log" Jan 30 09:38:39 crc kubenswrapper[4870]: I0130 09:38:39.316622 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7q5pn_84099c66-a13e-4949-ae36-7fa85a6a6a56/speaker/0.log" Jan 30 09:38:39 crc kubenswrapper[4870]: I0130 09:38:39.397691 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gvzmr" podUID="2fe021a5-6534-4aad-aabc-da82e18587d6" containerName="registry-server" containerID="cri-o://7a92b1340513f77963339b6db5fd2c460124ab2a79c0e1a016074e513a325747" gracePeriod=2 Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.000677 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gvzmr" Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.022405 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwhkv_008f589d-dab4-42af-9a42-cb6c00737f44/frr/0.log" Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.072183 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzkvq\" (UniqueName: \"kubernetes.io/projected/2fe021a5-6534-4aad-aabc-da82e18587d6-kube-api-access-hzkvq\") pod \"2fe021a5-6534-4aad-aabc-da82e18587d6\" (UID: \"2fe021a5-6534-4aad-aabc-da82e18587d6\") " Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.072230 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fe021a5-6534-4aad-aabc-da82e18587d6-catalog-content\") pod \"2fe021a5-6534-4aad-aabc-da82e18587d6\" (UID: \"2fe021a5-6534-4aad-aabc-da82e18587d6\") " Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.072363 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fe021a5-6534-4aad-aabc-da82e18587d6-utilities\") pod \"2fe021a5-6534-4aad-aabc-da82e18587d6\" (UID: \"2fe021a5-6534-4aad-aabc-da82e18587d6\") " Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.074641 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fe021a5-6534-4aad-aabc-da82e18587d6-utilities" (OuterVolumeSpecName: "utilities") pod "2fe021a5-6534-4aad-aabc-da82e18587d6" (UID: "2fe021a5-6534-4aad-aabc-da82e18587d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.089968 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fe021a5-6534-4aad-aabc-da82e18587d6-kube-api-access-hzkvq" (OuterVolumeSpecName: "kube-api-access-hzkvq") pod "2fe021a5-6534-4aad-aabc-da82e18587d6" (UID: "2fe021a5-6534-4aad-aabc-da82e18587d6"). InnerVolumeSpecName "kube-api-access-hzkvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.145995 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fe021a5-6534-4aad-aabc-da82e18587d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fe021a5-6534-4aad-aabc-da82e18587d6" (UID: "2fe021a5-6534-4aad-aabc-da82e18587d6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.175281 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzkvq\" (UniqueName: \"kubernetes.io/projected/2fe021a5-6534-4aad-aabc-da82e18587d6-kube-api-access-hzkvq\") on node \"crc\" DevicePath \"\"" Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.175327 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fe021a5-6534-4aad-aabc-da82e18587d6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.175342 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fe021a5-6534-4aad-aabc-da82e18587d6-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.408605 4870 generic.go:334] "Generic (PLEG): container finished" podID="2fe021a5-6534-4aad-aabc-da82e18587d6" containerID="7a92b1340513f77963339b6db5fd2c460124ab2a79c0e1a016074e513a325747" exitCode=0 Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.408678 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvzmr" event={"ID":"2fe021a5-6534-4aad-aabc-da82e18587d6","Type":"ContainerDied","Data":"7a92b1340513f77963339b6db5fd2c460124ab2a79c0e1a016074e513a325747"} Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.409085 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvzmr" event={"ID":"2fe021a5-6534-4aad-aabc-da82e18587d6","Type":"ContainerDied","Data":"b44c001080442c74c291f4234f373f7dcd75b3d230046d851e0075a1d404593a"} Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.409116 4870 scope.go:117] "RemoveContainer" containerID="7a92b1340513f77963339b6db5fd2c460124ab2a79c0e1a016074e513a325747" Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.408711 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gvzmr" Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.431570 4870 scope.go:117] "RemoveContainer" containerID="25bbe5473f8721e2d80baefa6abb9b0a74613b494e6b78293367482ab21a0a2a" Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.451393 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gvzmr"] Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.465851 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gvzmr"] Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.470008 4870 scope.go:117] "RemoveContainer" containerID="d4c68d2b61af4455d80436ab2cd4e5bb21e57b71e94a4b798fd7f012135b08c6" Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.543257 4870 scope.go:117] "RemoveContainer" containerID="7a92b1340513f77963339b6db5fd2c460124ab2a79c0e1a016074e513a325747" Jan 30 09:38:40 crc kubenswrapper[4870]: E0130 09:38:40.543747 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a92b1340513f77963339b6db5fd2c460124ab2a79c0e1a016074e513a325747\": container with ID starting with 7a92b1340513f77963339b6db5fd2c460124ab2a79c0e1a016074e513a325747 not found: ID does not exist" containerID="7a92b1340513f77963339b6db5fd2c460124ab2a79c0e1a016074e513a325747" Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.543791 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a92b1340513f77963339b6db5fd2c460124ab2a79c0e1a016074e513a325747"} err="failed to get container status \"7a92b1340513f77963339b6db5fd2c460124ab2a79c0e1a016074e513a325747\": rpc error: code = NotFound desc = could not find container \"7a92b1340513f77963339b6db5fd2c460124ab2a79c0e1a016074e513a325747\": container with ID starting with 7a92b1340513f77963339b6db5fd2c460124ab2a79c0e1a016074e513a325747 not found: ID does not exist" Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.543818 4870 scope.go:117] "RemoveContainer" containerID="25bbe5473f8721e2d80baefa6abb9b0a74613b494e6b78293367482ab21a0a2a" Jan 30 09:38:40 crc kubenswrapper[4870]: E0130 09:38:40.545324 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25bbe5473f8721e2d80baefa6abb9b0a74613b494e6b78293367482ab21a0a2a\": container with ID starting with 25bbe5473f8721e2d80baefa6abb9b0a74613b494e6b78293367482ab21a0a2a not found: ID does not exist" containerID="25bbe5473f8721e2d80baefa6abb9b0a74613b494e6b78293367482ab21a0a2a" Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.545359 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25bbe5473f8721e2d80baefa6abb9b0a74613b494e6b78293367482ab21a0a2a"} err="failed to get container status \"25bbe5473f8721e2d80baefa6abb9b0a74613b494e6b78293367482ab21a0a2a\": rpc error: code = NotFound desc = could not find container \"25bbe5473f8721e2d80baefa6abb9b0a74613b494e6b78293367482ab21a0a2a\": container with ID starting with 25bbe5473f8721e2d80baefa6abb9b0a74613b494e6b78293367482ab21a0a2a not found: ID does not exist" Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.545378 4870 scope.go:117] "RemoveContainer" containerID="d4c68d2b61af4455d80436ab2cd4e5bb21e57b71e94a4b798fd7f012135b08c6" Jan 30 09:38:40 crc kubenswrapper[4870]: E0130 09:38:40.545719 4870 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d4c68d2b61af4455d80436ab2cd4e5bb21e57b71e94a4b798fd7f012135b08c6\": container with ID starting with d4c68d2b61af4455d80436ab2cd4e5bb21e57b71e94a4b798fd7f012135b08c6 not found: ID does not exist" containerID="d4c68d2b61af4455d80436ab2cd4e5bb21e57b71e94a4b798fd7f012135b08c6" Jan 30 09:38:40 crc kubenswrapper[4870]: I0130 09:38:40.545777 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4c68d2b61af4455d80436ab2cd4e5bb21e57b71e94a4b798fd7f012135b08c6"} err="failed to get container status \"d4c68d2b61af4455d80436ab2cd4e5bb21e57b71e94a4b798fd7f012135b08c6\": rpc error: code = NotFound desc = could not find container \"d4c68d2b61af4455d80436ab2cd4e5bb21e57b71e94a4b798fd7f012135b08c6\": container with ID starting with d4c68d2b61af4455d80436ab2cd4e5bb21e57b71e94a4b798fd7f012135b08c6 not found: ID does not exist" Jan 30 09:38:42 crc kubenswrapper[4870]: I0130 09:38:42.092780 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fe021a5-6534-4aad-aabc-da82e18587d6" path="/var/lib/kubelet/pods/2fe021a5-6534-4aad-aabc-da82e18587d6/volumes" Jan 30 09:38:53 crc kubenswrapper[4870]: I0130 09:38:53.119868 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp_f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef/util/0.log" Jan 30 09:38:53 crc kubenswrapper[4870]: I0130 09:38:53.396895 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp_f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef/util/0.log" Jan 30 09:38:53 crc kubenswrapper[4870]: I0130 09:38:53.400489 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp_f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef/pull/0.log" Jan 30 09:38:53 crc kubenswrapper[4870]: I0130 09:38:53.400578 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp_f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef/pull/0.log" Jan 30 09:38:53 crc kubenswrapper[4870]: I0130 09:38:53.586542 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp_f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef/extract/0.log" Jan 30 09:38:53 crc kubenswrapper[4870]: I0130 09:38:53.624300 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp_f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef/pull/0.log" Jan 30 09:38:53 crc kubenswrapper[4870]: I0130 09:38:53.641323 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5k6pp_f4f12d31-27df-4d6e-a1cb-64ed2b79d9ef/util/0.log" Jan 30 09:38:53 crc kubenswrapper[4870]: I0130 09:38:53.785621 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m_e702b53f-5799-4595-b78f-35717f81379f/util/0.log" Jan 30 09:38:53 crc kubenswrapper[4870]: I0130 09:38:53.953186 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m_e702b53f-5799-4595-b78f-35717f81379f/pull/0.log" 
Jan 30 09:38:53 crc kubenswrapper[4870]: I0130 09:38:53.962256 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m_e702b53f-5799-4595-b78f-35717f81379f/pull/0.log" Jan 30 09:38:53 crc kubenswrapper[4870]: I0130 09:38:53.984373 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m_e702b53f-5799-4595-b78f-35717f81379f/util/0.log" Jan 30 09:38:54 crc kubenswrapper[4870]: I0130 09:38:54.136355 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m_e702b53f-5799-4595-b78f-35717f81379f/util/0.log" Jan 30 09:38:54 crc kubenswrapper[4870]: I0130 09:38:54.146602 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m_e702b53f-5799-4595-b78f-35717f81379f/pull/0.log" Jan 30 09:38:54 crc kubenswrapper[4870]: I0130 09:38:54.179229 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713s5s5m_e702b53f-5799-4595-b78f-35717f81379f/extract/0.log" Jan 30 09:38:54 crc kubenswrapper[4870]: I0130 09:38:54.655946 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m_69895f16-2797-4fd7-aedf-54fc47cd2c4f/util/0.log" Jan 30 09:38:54 crc kubenswrapper[4870]: I0130 09:38:54.866527 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m_69895f16-2797-4fd7-aedf-54fc47cd2c4f/util/0.log" Jan 30 09:38:54 crc kubenswrapper[4870]: I0130 09:38:54.910846 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m_69895f16-2797-4fd7-aedf-54fc47cd2c4f/pull/0.log" Jan 30 09:38:54 crc kubenswrapper[4870]: I0130 09:38:54.916652 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m_69895f16-2797-4fd7-aedf-54fc47cd2c4f/pull/0.log" Jan 30 09:38:55 crc kubenswrapper[4870]: I0130 09:38:55.085332 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m_69895f16-2797-4fd7-aedf-54fc47cd2c4f/pull/0.log" Jan 30 09:38:55 crc kubenswrapper[4870]: I0130 09:38:55.107015 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m_69895f16-2797-4fd7-aedf-54fc47cd2c4f/util/0.log" Jan 30 09:38:55 crc kubenswrapper[4870]: I0130 09:38:55.133445 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lkw7m_69895f16-2797-4fd7-aedf-54fc47cd2c4f/extract/0.log" Jan 30 09:38:55 crc kubenswrapper[4870]: I0130 09:38:55.272202 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-whfhw_ea80eb92-6881-4e69-8ca2-050d32254eb7/extract-utilities/0.log" Jan 30 09:38:55 crc kubenswrapper[4870]: I0130 09:38:55.456407 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-whfhw_ea80eb92-6881-4e69-8ca2-050d32254eb7/extract-content/0.log" Jan 30 09:38:55 crc kubenswrapper[4870]: I0130 09:38:55.458086 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-whfhw_ea80eb92-6881-4e69-8ca2-050d32254eb7/extract-utilities/0.log" Jan 30 09:38:55 crc kubenswrapper[4870]: I0130 09:38:55.486701 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-whfhw_ea80eb92-6881-4e69-8ca2-050d32254eb7/extract-content/0.log" Jan 30 09:38:55 crc kubenswrapper[4870]: I0130 09:38:55.611362 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-whfhw_ea80eb92-6881-4e69-8ca2-050d32254eb7/extract-utilities/0.log" Jan 30 09:38:55 crc kubenswrapper[4870]: I0130 09:38:55.678115 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-whfhw_ea80eb92-6881-4e69-8ca2-050d32254eb7/extract-content/0.log" Jan 30 09:38:55 crc kubenswrapper[4870]: I0130 09:38:55.831803 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mqxgq_d7b3d065-5057-49c1-be84-7880d7d4d619/extract-utilities/0.log" Jan 30 09:38:56 crc kubenswrapper[4870]: I0130 09:38:56.022825 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mqxgq_d7b3d065-5057-49c1-be84-7880d7d4d619/extract-utilities/0.log" Jan 30 09:38:56 crc kubenswrapper[4870]: I0130 09:38:56.079951 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mqxgq_d7b3d065-5057-49c1-be84-7880d7d4d619/extract-content/0.log" Jan 30 09:38:56 crc kubenswrapper[4870]: I0130 09:38:56.094657 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mqxgq_d7b3d065-5057-49c1-be84-7880d7d4d619/extract-content/0.log" Jan 30 09:38:56 crc kubenswrapper[4870]: I0130 09:38:56.284514 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-whfhw_ea80eb92-6881-4e69-8ca2-050d32254eb7/registry-server/0.log" Jan 30 09:38:56 crc kubenswrapper[4870]: I0130 09:38:56.756325 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mqxgq_d7b3d065-5057-49c1-be84-7880d7d4d619/extract-utilities/0.log" Jan 30 09:38:56 crc kubenswrapper[4870]: I0130 09:38:56.792099 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mqxgq_d7b3d065-5057-49c1-be84-7880d7d4d619/extract-content/0.log" Jan 30 09:38:57 crc kubenswrapper[4870]: I0130 09:38:57.097766 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-glxrr_b1839882-74e1-4c94-9d83-849d10c41089/extract-utilities/0.log" Jan 30 09:38:57 crc kubenswrapper[4870]: I0130 09:38:57.112358 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vkhzd_83d46dd9-5ab7-44c9-b032-1241911b6d82/marketplace-operator/0.log" Jan 30 09:38:57 crc kubenswrapper[4870]: I0130 09:38:57.386314 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-glxrr_b1839882-74e1-4c94-9d83-849d10c41089/extract-utilities/0.log" Jan 30 09:38:57 crc kubenswrapper[4870]: I0130 09:38:57.396971 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-glxrr_b1839882-74e1-4c94-9d83-849d10c41089/extract-content/0.log" Jan 30 09:38:57 crc kubenswrapper[4870]: I0130 09:38:57.406490 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-glxrr_b1839882-74e1-4c94-9d83-849d10c41089/extract-content/0.log" Jan 30 09:38:57 crc kubenswrapper[4870]: I0130 09:38:57.616784 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-glxrr_b1839882-74e1-4c94-9d83-849d10c41089/extract-utilities/0.log" Jan 30 09:38:57 crc kubenswrapper[4870]: I0130 09:38:57.758752 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-glxrr_b1839882-74e1-4c94-9d83-849d10c41089/extract-content/0.log" Jan 30 09:38:57 crc kubenswrapper[4870]: I0130 09:38:57.837335 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mqxgq_d7b3d065-5057-49c1-be84-7880d7d4d619/registry-server/0.log" Jan 30 09:38:57 crc kubenswrapper[4870]: I0130 09:38:57.886992 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8dmqx_71b77216-d7c7-4a69-8596-e64fd99129c6/extract-utilities/0.log" Jan 30 09:38:57 crc kubenswrapper[4870]: I0130 09:38:57.904021 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-glxrr_b1839882-74e1-4c94-9d83-849d10c41089/registry-server/0.log" Jan 30 09:38:58 crc kubenswrapper[4870]: I0130 09:38:58.030133 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8dmqx_71b77216-d7c7-4a69-8596-e64fd99129c6/extract-utilities/0.log" Jan 30 09:38:58 crc kubenswrapper[4870]: I0130 09:38:58.030390 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8dmqx_71b77216-d7c7-4a69-8596-e64fd99129c6/extract-content/0.log" Jan 30 09:38:58 crc kubenswrapper[4870]: I0130 09:38:58.078186 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8dmqx_71b77216-d7c7-4a69-8596-e64fd99129c6/extract-content/0.log" Jan 30 09:38:58 crc kubenswrapper[4870]: I0130 09:38:58.216820 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8dmqx_71b77216-d7c7-4a69-8596-e64fd99129c6/extract-utilities/0.log" Jan 30 09:38:58 crc kubenswrapper[4870]: I0130 09:38:58.252327 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8dmqx_71b77216-d7c7-4a69-8596-e64fd99129c6/extract-content/0.log" Jan 30 09:38:59 crc kubenswrapper[4870]: I0130 09:38:59.074893 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8dmqx_71b77216-d7c7-4a69-8596-e64fd99129c6/registry-server/0.log" Jan 30 09:39:11 crc kubenswrapper[4870]: I0130 09:39:11.465619 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-bf5558b74-9clj8_586011b7-bc23-4a41-8795-bc28910cd170/prometheus-operator-admission-webhook/0.log" Jan 30 09:39:11 crc kubenswrapper[4870]: I0130 09:39:11.468045 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-bf5558b74-8wxnf_1b8b459d-7a00-4e96-8916-4edd9fc87b99/prometheus-operator-admission-webhook/0.log" Jan 30 09:39:11 crc kubenswrapper[4870]: I0130 
09:39:11.477654 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-hj2pn_614f63fc-ed66-41bb-b9fe-4229b3b67f50/prometheus-operator/0.log" Jan 30 09:39:11 crc kubenswrapper[4870]: I0130 09:39:11.684666 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-lv4dk_0f7d84eb-b450-4168-b207-22520fed3fd3/operator/0.log" Jan 30 09:39:11 crc kubenswrapper[4870]: I0130 09:39:11.691621 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-tmzq2_962cb597-f461-4983-b37a-a4c9e545f7d8/perses-operator/0.log" Jan 30 09:40:25 crc kubenswrapper[4870]: I0130 09:40:25.249474 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:40:25 crc kubenswrapper[4870]: I0130 09:40:25.250023 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:40:55 crc kubenswrapper[4870]: I0130 09:40:55.249273 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:40:55 crc kubenswrapper[4870]: I0130 09:40:55.251964 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:41:22 crc kubenswrapper[4870]: I0130 09:41:22.067803 4870 generic.go:334] "Generic (PLEG): container finished" podID="6769b74f-20a7-48a8-b39b-d812418dbab4" containerID="6fb7d80f8c08e636076085f11ddc05d718f9baf6df6e1017b15952720a016879" exitCode=0 Jan 30 09:41:22 crc kubenswrapper[4870]: I0130 09:41:22.067902 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngvkt/must-gather-jl6kn" event={"ID":"6769b74f-20a7-48a8-b39b-d812418dbab4","Type":"ContainerDied","Data":"6fb7d80f8c08e636076085f11ddc05d718f9baf6df6e1017b15952720a016879"} Jan 30 09:41:22 crc kubenswrapper[4870]: I0130 09:41:22.069133 4870 scope.go:117] "RemoveContainer" containerID="6fb7d80f8c08e636076085f11ddc05d718f9baf6df6e1017b15952720a016879" Jan 30 09:41:23 crc kubenswrapper[4870]: I0130 09:41:23.069714 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ngvkt_must-gather-jl6kn_6769b74f-20a7-48a8-b39b-d812418dbab4/gather/0.log" Jan 30 09:41:25 crc kubenswrapper[4870]: I0130 09:41:25.250395 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:41:25 crc kubenswrapper[4870]: 
I0130 09:41:25.250666 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:41:25 crc kubenswrapper[4870]: I0130 09:41:25.250706 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 09:41:25 crc kubenswrapper[4870]: I0130 09:41:25.251466 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"47da0905bfb9191daf20cd74642a93fa3691ef9a29f1507f03d471be1698c277"} pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:41:25 crc kubenswrapper[4870]: I0130 09:41:25.251519 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" containerID="cri-o://47da0905bfb9191daf20cd74642a93fa3691ef9a29f1507f03d471be1698c277" gracePeriod=600 Jan 30 09:41:26 crc kubenswrapper[4870]: I0130 09:41:26.107250 4870 generic.go:334] "Generic (PLEG): container finished" podID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerID="47da0905bfb9191daf20cd74642a93fa3691ef9a29f1507f03d471be1698c277" exitCode=0 Jan 30 09:41:26 crc kubenswrapper[4870]: I0130 09:41:26.107334 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerDied","Data":"47da0905bfb9191daf20cd74642a93fa3691ef9a29f1507f03d471be1698c277"} Jan 30 09:41:26 crc kubenswrapper[4870]: I0130 09:41:26.108316 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerStarted","Data":"6306a905a69b5913903a55220b56d897c0cf89cc4b73121b61a3a52cb36ff71f"} Jan 30 09:41:26 crc kubenswrapper[4870]: I0130 09:41:26.108493 4870 scope.go:117] "RemoveContainer" containerID="7f9a0e2b7a855a7780193a1b5ddeb2a0d89688bd487b6861d08256fc31412f40" Jan 30 09:41:31 crc kubenswrapper[4870]: I0130 09:41:31.569268 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ngvkt/must-gather-jl6kn"] Jan 30 09:41:31 crc kubenswrapper[4870]: I0130 09:41:31.570195 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-ngvkt/must-gather-jl6kn" podUID="6769b74f-20a7-48a8-b39b-d812418dbab4" containerName="copy" containerID="cri-o://94b1b42bc8f7c336d8fcdcb61cfce9aebe53e0b19d58671be8fe96811a489868" gracePeriod=2 Jan 30 09:41:31 crc kubenswrapper[4870]: I0130 09:41:31.580425 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ngvkt/must-gather-jl6kn"] Jan 30 09:41:32 crc kubenswrapper[4870]: I0130 09:41:32.067223 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ngvkt_must-gather-jl6kn_6769b74f-20a7-48a8-b39b-d812418dbab4/copy/0.log" Jan 30 09:41:32 crc kubenswrapper[4870]: I0130 09:41:32.067965 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ngvkt/must-gather-jl6kn" Jan 30 09:41:32 crc kubenswrapper[4870]: I0130 09:41:32.181835 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltpzc\" (UniqueName: \"kubernetes.io/projected/6769b74f-20a7-48a8-b39b-d812418dbab4-kube-api-access-ltpzc\") pod \"6769b74f-20a7-48a8-b39b-d812418dbab4\" (UID: \"6769b74f-20a7-48a8-b39b-d812418dbab4\") " Jan 30 09:41:32 crc kubenswrapper[4870]: I0130 09:41:32.181939 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6769b74f-20a7-48a8-b39b-d812418dbab4-must-gather-output\") pod \"6769b74f-20a7-48a8-b39b-d812418dbab4\" (UID: \"6769b74f-20a7-48a8-b39b-d812418dbab4\") " Jan 30 09:41:32 crc kubenswrapper[4870]: I0130 09:41:32.203861 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6769b74f-20a7-48a8-b39b-d812418dbab4-kube-api-access-ltpzc" (OuterVolumeSpecName: "kube-api-access-ltpzc") pod "6769b74f-20a7-48a8-b39b-d812418dbab4" (UID: "6769b74f-20a7-48a8-b39b-d812418dbab4"). InnerVolumeSpecName "kube-api-access-ltpzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:41:32 crc kubenswrapper[4870]: I0130 09:41:32.209019 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ngvkt_must-gather-jl6kn_6769b74f-20a7-48a8-b39b-d812418dbab4/copy/0.log" Jan 30 09:41:32 crc kubenswrapper[4870]: I0130 09:41:32.215533 4870 generic.go:334] "Generic (PLEG): container finished" podID="6769b74f-20a7-48a8-b39b-d812418dbab4" containerID="94b1b42bc8f7c336d8fcdcb61cfce9aebe53e0b19d58671be8fe96811a489868" exitCode=143 Jan 30 09:41:32 crc kubenswrapper[4870]: I0130 09:41:32.215602 4870 scope.go:117] "RemoveContainer" containerID="94b1b42bc8f7c336d8fcdcb61cfce9aebe53e0b19d58671be8fe96811a489868" Jan 30 09:41:32 crc kubenswrapper[4870]: I0130 09:41:32.215611 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ngvkt/must-gather-jl6kn" Jan 30 09:41:32 crc kubenswrapper[4870]: I0130 09:41:32.253646 4870 scope.go:117] "RemoveContainer" containerID="6fb7d80f8c08e636076085f11ddc05d718f9baf6df6e1017b15952720a016879" Jan 30 09:41:32 crc kubenswrapper[4870]: I0130 09:41:32.289150 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltpzc\" (UniqueName: \"kubernetes.io/projected/6769b74f-20a7-48a8-b39b-d812418dbab4-kube-api-access-ltpzc\") on node \"crc\" DevicePath \"\"" Jan 30 09:41:32 crc kubenswrapper[4870]: I0130 09:41:32.343296 4870 scope.go:117] "RemoveContainer" containerID="94b1b42bc8f7c336d8fcdcb61cfce9aebe53e0b19d58671be8fe96811a489868" Jan 30 09:41:32 crc kubenswrapper[4870]: E0130 09:41:32.343656 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94b1b42bc8f7c336d8fcdcb61cfce9aebe53e0b19d58671be8fe96811a489868\": container with ID starting with 94b1b42bc8f7c336d8fcdcb61cfce9aebe53e0b19d58671be8fe96811a489868 not found: ID does not exist" containerID="94b1b42bc8f7c336d8fcdcb61cfce9aebe53e0b19d58671be8fe96811a489868" Jan 30 09:41:32 crc kubenswrapper[4870]: I0130 09:41:32.343703 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94b1b42bc8f7c336d8fcdcb61cfce9aebe53e0b19d58671be8fe96811a489868"} err="failed to get container status \"94b1b42bc8f7c336d8fcdcb61cfce9aebe53e0b19d58671be8fe96811a489868\": rpc error: code = NotFound desc = could not find container \"94b1b42bc8f7c336d8fcdcb61cfce9aebe53e0b19d58671be8fe96811a489868\": container with ID starting with 94b1b42bc8f7c336d8fcdcb61cfce9aebe53e0b19d58671be8fe96811a489868 not found: ID does not exist" Jan 30 09:41:32 crc kubenswrapper[4870]: I0130 09:41:32.343732 4870 scope.go:117] "RemoveContainer" containerID="6fb7d80f8c08e636076085f11ddc05d718f9baf6df6e1017b15952720a016879" Jan 30 09:41:32 crc kubenswrapper[4870]: E0130 09:41:32.344027 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fb7d80f8c08e636076085f11ddc05d718f9baf6df6e1017b15952720a016879\": container with ID starting with 6fb7d80f8c08e636076085f11ddc05d718f9baf6df6e1017b15952720a016879 not found: ID does not exist" containerID="6fb7d80f8c08e636076085f11ddc05d718f9baf6df6e1017b15952720a016879" Jan 30 09:41:32 crc kubenswrapper[4870]: I0130 09:41:32.344056 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fb7d80f8c08e636076085f11ddc05d718f9baf6df6e1017b15952720a016879"} err="failed to get container status \"6fb7d80f8c08e636076085f11ddc05d718f9baf6df6e1017b15952720a016879\": rpc error: code = NotFound desc = could not find container \"6fb7d80f8c08e636076085f11ddc05d718f9baf6df6e1017b15952720a016879\": container with ID starting with 6fb7d80f8c08e636076085f11ddc05d718f9baf6df6e1017b15952720a016879 not found: ID does not exist" Jan 30 09:41:32 crc kubenswrapper[4870]: I0130 09:41:32.438971 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6769b74f-20a7-48a8-b39b-d812418dbab4-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6769b74f-20a7-48a8-b39b-d812418dbab4" (UID: "6769b74f-20a7-48a8-b39b-d812418dbab4"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:41:32 crc kubenswrapper[4870]: I0130 09:41:32.493313 4870 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6769b74f-20a7-48a8-b39b-d812418dbab4-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 09:41:34 crc kubenswrapper[4870]: I0130 09:41:34.098539 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6769b74f-20a7-48a8-b39b-d812418dbab4" path="/var/lib/kubelet/pods/6769b74f-20a7-48a8-b39b-d812418dbab4/volumes" Jan 30 09:41:39 crc kubenswrapper[4870]: I0130 09:41:39.660978 4870 scope.go:117] "RemoveContainer" containerID="c10769db3e913e00355f8966e729b5b6b9071d652adf30ce568e54dc81b0dfbf" Jan 30 09:43:15 crc kubenswrapper[4870]: I0130 09:43:15.765416 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z2j9g"] Jan 30 09:43:15 crc kubenswrapper[4870]: E0130 09:43:15.767138 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6769b74f-20a7-48a8-b39b-d812418dbab4" containerName="copy" Jan 30 09:43:15 crc kubenswrapper[4870]: I0130 09:43:15.767155 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="6769b74f-20a7-48a8-b39b-d812418dbab4" containerName="copy" Jan 30 09:43:15 crc kubenswrapper[4870]: E0130 09:43:15.767177 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe021a5-6534-4aad-aabc-da82e18587d6" containerName="extract-content" Jan 30 09:43:15 crc kubenswrapper[4870]: I0130 09:43:15.767185 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe021a5-6534-4aad-aabc-da82e18587d6" containerName="extract-content" Jan 30 09:43:15 crc kubenswrapper[4870]: E0130 09:43:15.767204 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe021a5-6534-4aad-aabc-da82e18587d6" containerName="extract-utilities" Jan 30 09:43:15 crc kubenswrapper[4870]: I0130 09:43:15.767215 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe021a5-6534-4aad-aabc-da82e18587d6" containerName="extract-utilities" Jan 30 09:43:15 crc kubenswrapper[4870]: E0130 09:43:15.767226 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe021a5-6534-4aad-aabc-da82e18587d6" containerName="registry-server" Jan 30 09:43:15 crc kubenswrapper[4870]: I0130 09:43:15.767233 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe021a5-6534-4aad-aabc-da82e18587d6" containerName="registry-server" Jan 30 09:43:15 crc kubenswrapper[4870]: E0130 09:43:15.767253 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6769b74f-20a7-48a8-b39b-d812418dbab4" containerName="gather" Jan 30 09:43:15 crc kubenswrapper[4870]: I0130 09:43:15.767260 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="6769b74f-20a7-48a8-b39b-d812418dbab4" containerName="gather" Jan 30 09:43:15 crc kubenswrapper[4870]: I0130 09:43:15.767569 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="6769b74f-20a7-48a8-b39b-d812418dbab4" containerName="gather" Jan 30 09:43:15 crc kubenswrapper[4870]: I0130 09:43:15.767591 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe021a5-6534-4aad-aabc-da82e18587d6" containerName="registry-server" Jan 30 09:43:15 crc kubenswrapper[4870]: I0130 09:43:15.767614 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="6769b74f-20a7-48a8-b39b-d812418dbab4" containerName="copy" Jan 30 09:43:15 crc kubenswrapper[4870]: I0130 09:43:15.769512 4870 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z2j9g" Jan 30 09:43:15 crc kubenswrapper[4870]: I0130 09:43:15.778276 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z2j9g"] Jan 30 09:43:15 crc kubenswrapper[4870]: I0130 09:43:15.822186 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf-utilities\") pod \"redhat-marketplace-z2j9g\" (UID: \"2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf\") " pod="openshift-marketplace/redhat-marketplace-z2j9g" Jan 30 09:43:15 crc kubenswrapper[4870]: I0130 09:43:15.822534 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88xbj\" (UniqueName: \"kubernetes.io/projected/2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf-kube-api-access-88xbj\") pod \"redhat-marketplace-z2j9g\" (UID: \"2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf\") " pod="openshift-marketplace/redhat-marketplace-z2j9g" Jan 30 09:43:15 crc kubenswrapper[4870]: I0130 09:43:15.822667 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf-catalog-content\") pod \"redhat-marketplace-z2j9g\" (UID: \"2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf\") " pod="openshift-marketplace/redhat-marketplace-z2j9g" Jan 30 09:43:15 crc kubenswrapper[4870]: I0130 09:43:15.924397 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf-catalog-content\") pod \"redhat-marketplace-z2j9g\" (UID: \"2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf\") " pod="openshift-marketplace/redhat-marketplace-z2j9g" Jan 30 09:43:15 crc kubenswrapper[4870]: I0130 09:43:15.924613 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf-utilities\") pod \"redhat-marketplace-z2j9g\" (UID: \"2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf\") " pod="openshift-marketplace/redhat-marketplace-z2j9g" Jan 30 09:43:15 crc kubenswrapper[4870]: I0130 09:43:15.924673 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88xbj\" (UniqueName: \"kubernetes.io/projected/2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf-kube-api-access-88xbj\") pod \"redhat-marketplace-z2j9g\" (UID: \"2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf\") " pod="openshift-marketplace/redhat-marketplace-z2j9g" Jan 30 09:43:15 crc kubenswrapper[4870]: I0130 09:43:15.925257 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf-utilities\") pod \"redhat-marketplace-z2j9g\" (UID: \"2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf\") " pod="openshift-marketplace/redhat-marketplace-z2j9g" Jan 30 09:43:15 crc kubenswrapper[4870]: I0130 09:43:15.925387 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf-catalog-content\") pod \"redhat-marketplace-z2j9g\" (UID: \"2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf\") " pod="openshift-marketplace/redhat-marketplace-z2j9g" Jan 30 09:43:15 crc kubenswrapper[4870]: I0130 09:43:15.951198 4870 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88xbj\" (UniqueName: \"kubernetes.io/projected/2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf-kube-api-access-88xbj\") pod \"redhat-marketplace-z2j9g\" (UID: \"2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf\") " pod="openshift-marketplace/redhat-marketplace-z2j9g" Jan 30 09:43:16 crc kubenswrapper[4870]: I0130 09:43:16.099951 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z2j9g" Jan 30 09:43:16 crc kubenswrapper[4870]: I0130 09:43:16.399097 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z2j9g"] Jan 30 09:43:17 crc kubenswrapper[4870]: I0130 09:43:17.278519 4870 generic.go:334] "Generic (PLEG): container finished" podID="2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf" containerID="5cfc010005068388a1db08f2050d5e3b191c08b875c5d8cb2d42dbb809ed767f" exitCode=0 Jan 30 09:43:17 crc kubenswrapper[4870]: I0130 09:43:17.278753 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z2j9g" event={"ID":"2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf","Type":"ContainerDied","Data":"5cfc010005068388a1db08f2050d5e3b191c08b875c5d8cb2d42dbb809ed767f"} Jan 30 09:43:17 crc kubenswrapper[4870]: I0130 09:43:17.278916 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z2j9g" event={"ID":"2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf","Type":"ContainerStarted","Data":"ce47c6d811f8804946b6cd93e6bc8bd97e403fdb31b9759bb3ce8bab5f26510f"} Jan 30 09:43:18 crc kubenswrapper[4870]: I0130 09:43:18.292798 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z2j9g" event={"ID":"2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf","Type":"ContainerStarted","Data":"bf7407dac474c4d4117f3382db3f03680b3b99f686b344d5f1e5b6698ae64466"} Jan 30 09:43:19 crc kubenswrapper[4870]: I0130 09:43:19.304416 4870 generic.go:334] "Generic (PLEG): container finished" podID="2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf" containerID="bf7407dac474c4d4117f3382db3f03680b3b99f686b344d5f1e5b6698ae64466" exitCode=0 Jan 30 09:43:19 crc kubenswrapper[4870]: I0130 09:43:19.305806 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z2j9g" event={"ID":"2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf","Type":"ContainerDied","Data":"bf7407dac474c4d4117f3382db3f03680b3b99f686b344d5f1e5b6698ae64466"} Jan 30 09:43:20 crc kubenswrapper[4870]: I0130 09:43:20.319332 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z2j9g" event={"ID":"2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf","Type":"ContainerStarted","Data":"ca8c25f1254aa5ab54be39ce1a4ac849e05021e88a045e105c579b053b2866d5"} Jan 30 09:43:20 crc kubenswrapper[4870]: I0130 09:43:20.352814 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z2j9g" podStartSLOduration=2.93440147 podStartE2EDuration="5.352791027s" podCreationTimestamp="2026-01-30 09:43:15 +0000 UTC" firstStartedPulling="2026-01-30 09:43:17.281697332 +0000 UTC m=+5635.977244441" lastFinishedPulling="2026-01-30 09:43:19.700086859 +0000 UTC m=+5638.395633998" observedRunningTime="2026-01-30 09:43:20.345265032 +0000 UTC m=+5639.040812141" watchObservedRunningTime="2026-01-30 09:43:20.352791027 +0000 UTC m=+5639.048338156" Jan 30 09:43:25 crc kubenswrapper[4870]: I0130 09:43:25.250225 4870 patch_prober.go:28] interesting 
pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:43:25 crc kubenswrapper[4870]: I0130 09:43:25.251157 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:43:26 crc kubenswrapper[4870]: I0130 09:43:26.100448 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z2j9g" Jan 30 09:43:26 crc kubenswrapper[4870]: I0130 09:43:26.100808 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z2j9g" Jan 30 09:43:26 crc kubenswrapper[4870]: I0130 09:43:26.145460 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z2j9g" Jan 30 09:43:26 crc kubenswrapper[4870]: I0130 09:43:26.474218 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z2j9g" Jan 30 09:43:26 crc kubenswrapper[4870]: I0130 09:43:26.525786 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z2j9g"] Jan 30 09:43:28 crc kubenswrapper[4870]: I0130 09:43:28.403563 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z2j9g" podUID="2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf" containerName="registry-server" containerID="cri-o://ca8c25f1254aa5ab54be39ce1a4ac849e05021e88a045e105c579b053b2866d5" gracePeriod=2 Jan 30 09:43:28 crc kubenswrapper[4870]: I0130 09:43:28.872038 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z2j9g" Jan 30 09:43:28 crc kubenswrapper[4870]: I0130 09:43:28.907570 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf-utilities\") pod \"2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf\" (UID: \"2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf\") " Jan 30 09:43:28 crc kubenswrapper[4870]: I0130 09:43:28.907761 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf-catalog-content\") pod \"2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf\" (UID: \"2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf\") " Jan 30 09:43:28 crc kubenswrapper[4870]: I0130 09:43:28.908082 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88xbj\" (UniqueName: \"kubernetes.io/projected/2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf-kube-api-access-88xbj\") pod \"2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf\" (UID: \"2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf\") " Jan 30 09:43:28 crc kubenswrapper[4870]: I0130 09:43:28.909222 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf-utilities" (OuterVolumeSpecName: "utilities") pod "2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf" (UID: "2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:43:28 crc kubenswrapper[4870]: I0130 09:43:28.913638 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf-kube-api-access-88xbj" (OuterVolumeSpecName: "kube-api-access-88xbj") pod "2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf" (UID: "2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf"). InnerVolumeSpecName "kube-api-access-88xbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:43:28 crc kubenswrapper[4870]: I0130 09:43:28.969358 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf" (UID: "2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:43:29 crc kubenswrapper[4870]: I0130 09:43:29.011077 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88xbj\" (UniqueName: \"kubernetes.io/projected/2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf-kube-api-access-88xbj\") on node \"crc\" DevicePath \"\"" Jan 30 09:43:29 crc kubenswrapper[4870]: I0130 09:43:29.011120 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:43:29 crc kubenswrapper[4870]: I0130 09:43:29.011134 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:43:29 crc kubenswrapper[4870]: I0130 09:43:29.417380 4870 generic.go:334] "Generic (PLEG): container finished" podID="2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf" containerID="ca8c25f1254aa5ab54be39ce1a4ac849e05021e88a045e105c579b053b2866d5" exitCode=0 Jan 30 09:43:29 crc kubenswrapper[4870]: I0130 09:43:29.417459 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z2j9g" event={"ID":"2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf","Type":"ContainerDied","Data":"ca8c25f1254aa5ab54be39ce1a4ac849e05021e88a045e105c579b053b2866d5"} Jan 30 09:43:29 crc kubenswrapper[4870]: I0130 09:43:29.417488 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z2j9g" Jan 30 09:43:29 crc kubenswrapper[4870]: I0130 09:43:29.417684 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z2j9g" event={"ID":"2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf","Type":"ContainerDied","Data":"ce47c6d811f8804946b6cd93e6bc8bd97e403fdb31b9759bb3ce8bab5f26510f"} Jan 30 09:43:29 crc kubenswrapper[4870]: I0130 09:43:29.417701 4870 scope.go:117] "RemoveContainer" containerID="ca8c25f1254aa5ab54be39ce1a4ac849e05021e88a045e105c579b053b2866d5" Jan 30 09:43:29 crc kubenswrapper[4870]: I0130 09:43:29.437343 4870 scope.go:117] "RemoveContainer" containerID="bf7407dac474c4d4117f3382db3f03680b3b99f686b344d5f1e5b6698ae64466" Jan 30 09:43:29 crc kubenswrapper[4870]: I0130 09:43:29.457749 4870 scope.go:117] "RemoveContainer" containerID="5cfc010005068388a1db08f2050d5e3b191c08b875c5d8cb2d42dbb809ed767f" Jan 30 09:43:29 crc kubenswrapper[4870]: I0130 09:43:29.538187 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z2j9g"] Jan 30 09:43:29 crc kubenswrapper[4870]: I0130 09:43:29.546975 4870 scope.go:117] "RemoveContainer" containerID="ca8c25f1254aa5ab54be39ce1a4ac849e05021e88a045e105c579b053b2866d5" Jan 30 09:43:29 crc kubenswrapper[4870]: E0130 09:43:29.547476 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca8c25f1254aa5ab54be39ce1a4ac849e05021e88a045e105c579b053b2866d5\": container with ID starting with ca8c25f1254aa5ab54be39ce1a4ac849e05021e88a045e105c579b053b2866d5 not found: ID does not exist" containerID="ca8c25f1254aa5ab54be39ce1a4ac849e05021e88a045e105c579b053b2866d5" Jan 30 09:43:29 crc kubenswrapper[4870]: I0130 09:43:29.547527 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca8c25f1254aa5ab54be39ce1a4ac849e05021e88a045e105c579b053b2866d5"} err="failed to get container status \"ca8c25f1254aa5ab54be39ce1a4ac849e05021e88a045e105c579b053b2866d5\": rpc error: code = NotFound desc = could not find container \"ca8c25f1254aa5ab54be39ce1a4ac849e05021e88a045e105c579b053b2866d5\": container with ID starting with ca8c25f1254aa5ab54be39ce1a4ac849e05021e88a045e105c579b053b2866d5 not found: ID does not exist" Jan 30 09:43:29 crc kubenswrapper[4870]: I0130 09:43:29.547560 4870 scope.go:117] "RemoveContainer" containerID="bf7407dac474c4d4117f3382db3f03680b3b99f686b344d5f1e5b6698ae64466" Jan 30 09:43:29 crc kubenswrapper[4870]: E0130 09:43:29.548112 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf7407dac474c4d4117f3382db3f03680b3b99f686b344d5f1e5b6698ae64466\": container with ID starting with bf7407dac474c4d4117f3382db3f03680b3b99f686b344d5f1e5b6698ae64466 not found: ID does not exist" containerID="bf7407dac474c4d4117f3382db3f03680b3b99f686b344d5f1e5b6698ae64466" Jan 30 09:43:29 crc kubenswrapper[4870]: I0130 09:43:29.548138 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf7407dac474c4d4117f3382db3f03680b3b99f686b344d5f1e5b6698ae64466"} err="failed to get container status \"bf7407dac474c4d4117f3382db3f03680b3b99f686b344d5f1e5b6698ae64466\": rpc error: code = NotFound desc = could not find container \"bf7407dac474c4d4117f3382db3f03680b3b99f686b344d5f1e5b6698ae64466\": container with ID starting with bf7407dac474c4d4117f3382db3f03680b3b99f686b344d5f1e5b6698ae64466 not 
found: ID does not exist" Jan 30 09:43:29 crc kubenswrapper[4870]: I0130 09:43:29.548156 4870 scope.go:117] "RemoveContainer" containerID="5cfc010005068388a1db08f2050d5e3b191c08b875c5d8cb2d42dbb809ed767f" Jan 30 09:43:29 crc kubenswrapper[4870]: E0130 09:43:29.548642 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cfc010005068388a1db08f2050d5e3b191c08b875c5d8cb2d42dbb809ed767f\": container with ID starting with 5cfc010005068388a1db08f2050d5e3b191c08b875c5d8cb2d42dbb809ed767f not found: ID does not exist" containerID="5cfc010005068388a1db08f2050d5e3b191c08b875c5d8cb2d42dbb809ed767f" Jan 30 09:43:29 crc kubenswrapper[4870]: I0130 09:43:29.548677 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cfc010005068388a1db08f2050d5e3b191c08b875c5d8cb2d42dbb809ed767f"} err="failed to get container status \"5cfc010005068388a1db08f2050d5e3b191c08b875c5d8cb2d42dbb809ed767f\": rpc error: code = NotFound desc = could not find container \"5cfc010005068388a1db08f2050d5e3b191c08b875c5d8cb2d42dbb809ed767f\": container with ID starting with 5cfc010005068388a1db08f2050d5e3b191c08b875c5d8cb2d42dbb809ed767f not found: ID does not exist" Jan 30 09:43:29 crc kubenswrapper[4870]: I0130 09:43:29.550996 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z2j9g"] Jan 30 09:43:30 crc kubenswrapper[4870]: I0130 09:43:30.086283 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf" path="/var/lib/kubelet/pods/2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf/volumes" Jan 30 09:43:55 crc kubenswrapper[4870]: I0130 09:43:55.249304 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:43:55 crc kubenswrapper[4870]: I0130 09:43:55.250025 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:44:25 crc kubenswrapper[4870]: I0130 09:44:25.249683 4870 patch_prober.go:28] interesting pod/machine-config-daemon-j4sd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:44:25 crc kubenswrapper[4870]: I0130 09:44:25.250402 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:44:25 crc kubenswrapper[4870]: I0130 09:44:25.250456 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" Jan 30 09:44:25 crc kubenswrapper[4870]: I0130 09:44:25.251120 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"6306a905a69b5913903a55220b56d897c0cf89cc4b73121b61a3a52cb36ff71f"} pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:44:25 crc kubenswrapper[4870]: I0130 09:44:25.251175 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerName="machine-config-daemon" containerID="cri-o://6306a905a69b5913903a55220b56d897c0cf89cc4b73121b61a3a52cb36ff71f" gracePeriod=600 Jan 30 09:44:25 crc kubenswrapper[4870]: E0130 09:44:25.385979 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:44:25 crc kubenswrapper[4870]: I0130 09:44:25.998806 4870 generic.go:334] "Generic (PLEG): container finished" podID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" containerID="6306a905a69b5913903a55220b56d897c0cf89cc4b73121b61a3a52cb36ff71f" exitCode=0 Jan 30 09:44:25 crc kubenswrapper[4870]: I0130 09:44:25.998865 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" event={"ID":"5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d","Type":"ContainerDied","Data":"6306a905a69b5913903a55220b56d897c0cf89cc4b73121b61a3a52cb36ff71f"} Jan 30 09:44:25 crc kubenswrapper[4870]: I0130 09:44:25.998938 4870 scope.go:117] "RemoveContainer" containerID="47da0905bfb9191daf20cd74642a93fa3691ef9a29f1507f03d471be1698c277" Jan 30 09:44:26 crc kubenswrapper[4870]: I0130 09:44:26.000404 4870 scope.go:117] "RemoveContainer" containerID="6306a905a69b5913903a55220b56d897c0cf89cc4b73121b61a3a52cb36ff71f" Jan 30 09:44:26 crc kubenswrapper[4870]: E0130 09:44:26.001421 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:44:37 crc kubenswrapper[4870]: I0130 09:44:37.074867 4870 scope.go:117] "RemoveContainer" containerID="6306a905a69b5913903a55220b56d897c0cf89cc4b73121b61a3a52cb36ff71f" Jan 30 09:44:37 crc kubenswrapper[4870]: E0130 09:44:37.076013 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:44:49 crc kubenswrapper[4870]: I0130 09:44:49.075404 4870 scope.go:117] "RemoveContainer" containerID="6306a905a69b5913903a55220b56d897c0cf89cc4b73121b61a3a52cb36ff71f" Jan 30 09:44:49 crc kubenswrapper[4870]: E0130 09:44:49.076759 4870 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:45:00 crc kubenswrapper[4870]: I0130 09:45:00.075284 4870 scope.go:117] "RemoveContainer" containerID="6306a905a69b5913903a55220b56d897c0cf89cc4b73121b61a3a52cb36ff71f" Jan 30 09:45:00 crc kubenswrapper[4870]: E0130 09:45:00.077536 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:45:00 crc kubenswrapper[4870]: I0130 09:45:00.161966 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496105-jk4bx"] Jan 30 09:45:00 crc kubenswrapper[4870]: E0130 09:45:00.162513 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf" containerName="registry-server" Jan 30 09:45:00 crc kubenswrapper[4870]: I0130 09:45:00.162539 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf" containerName="registry-server" Jan 30 09:45:00 crc kubenswrapper[4870]: E0130 09:45:00.162556 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf" containerName="extract-content" Jan 30 09:45:00 crc kubenswrapper[4870]: I0130 09:45:00.162564 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf" containerName="extract-content" Jan 30 09:45:00 crc kubenswrapper[4870]: E0130 09:45:00.162585 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf" containerName="extract-utilities" Jan 30 09:45:00 crc kubenswrapper[4870]: I0130 09:45:00.162593 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf" containerName="extract-utilities" Jan 30 09:45:00 crc kubenswrapper[4870]: I0130 09:45:00.162847 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ed1bee2-d571-4dcc-8ba9-f4257bb5dacf" containerName="registry-server" Jan 30 09:45:00 crc kubenswrapper[4870]: I0130 09:45:00.163738 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-jk4bx" Jan 30 09:45:00 crc kubenswrapper[4870]: I0130 09:45:00.168634 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 09:45:00 crc kubenswrapper[4870]: I0130 09:45:00.168642 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 09:45:00 crc kubenswrapper[4870]: I0130 09:45:00.175416 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496105-jk4bx"] Jan 30 09:45:00 crc kubenswrapper[4870]: I0130 09:45:00.211860 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54d93b9e-2b65-40bb-81f7-134f3ce2d101-secret-volume\") pod \"collect-profiles-29496105-jk4bx\" (UID: \"54d93b9e-2b65-40bb-81f7-134f3ce2d101\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-jk4bx" Jan 30 09:45:00 crc kubenswrapper[4870]: I0130 09:45:00.212050 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54d93b9e-2b65-40bb-81f7-134f3ce2d101-config-volume\") pod \"collect-profiles-29496105-jk4bx\" (UID: \"54d93b9e-2b65-40bb-81f7-134f3ce2d101\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-jk4bx" Jan 30 09:45:00 crc kubenswrapper[4870]: I0130 09:45:00.212338 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwnwp\" (UniqueName: \"kubernetes.io/projected/54d93b9e-2b65-40bb-81f7-134f3ce2d101-kube-api-access-xwnwp\") pod \"collect-profiles-29496105-jk4bx\" (UID: \"54d93b9e-2b65-40bb-81f7-134f3ce2d101\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-jk4bx" Jan 30 09:45:00 crc kubenswrapper[4870]: I0130 09:45:00.314304 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54d93b9e-2b65-40bb-81f7-134f3ce2d101-secret-volume\") pod \"collect-profiles-29496105-jk4bx\" (UID: \"54d93b9e-2b65-40bb-81f7-134f3ce2d101\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-jk4bx" Jan 30 09:45:00 crc kubenswrapper[4870]: I0130 09:45:00.314356 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54d93b9e-2b65-40bb-81f7-134f3ce2d101-config-volume\") pod \"collect-profiles-29496105-jk4bx\" (UID: \"54d93b9e-2b65-40bb-81f7-134f3ce2d101\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-jk4bx" Jan 30 09:45:00 crc kubenswrapper[4870]: I0130 09:45:00.314423 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwnwp\" (UniqueName: \"kubernetes.io/projected/54d93b9e-2b65-40bb-81f7-134f3ce2d101-kube-api-access-xwnwp\") pod \"collect-profiles-29496105-jk4bx\" (UID: \"54d93b9e-2b65-40bb-81f7-134f3ce2d101\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-jk4bx" Jan 30 09:45:00 crc kubenswrapper[4870]: I0130 09:45:00.315658 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54d93b9e-2b65-40bb-81f7-134f3ce2d101-config-volume\") pod 
\"collect-profiles-29496105-jk4bx\" (UID: \"54d93b9e-2b65-40bb-81f7-134f3ce2d101\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-jk4bx" Jan 30 09:45:00 crc kubenswrapper[4870]: I0130 09:45:00.335981 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54d93b9e-2b65-40bb-81f7-134f3ce2d101-secret-volume\") pod \"collect-profiles-29496105-jk4bx\" (UID: \"54d93b9e-2b65-40bb-81f7-134f3ce2d101\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-jk4bx" Jan 30 09:45:00 crc kubenswrapper[4870]: I0130 09:45:00.341099 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwnwp\" (UniqueName: \"kubernetes.io/projected/54d93b9e-2b65-40bb-81f7-134f3ce2d101-kube-api-access-xwnwp\") pod \"collect-profiles-29496105-jk4bx\" (UID: \"54d93b9e-2b65-40bb-81f7-134f3ce2d101\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-jk4bx" Jan 30 09:45:00 crc kubenswrapper[4870]: I0130 09:45:00.486778 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-jk4bx" Jan 30 09:45:00 crc kubenswrapper[4870]: I0130 09:45:00.936888 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496105-jk4bx"] Jan 30 09:45:01 crc kubenswrapper[4870]: I0130 09:45:01.452749 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-jk4bx" event={"ID":"54d93b9e-2b65-40bb-81f7-134f3ce2d101","Type":"ContainerStarted","Data":"202b2d053a72796ab3db27f1b87308f2dcc900d8af762a773e5d8fc1878c82e2"} Jan 30 09:45:01 crc kubenswrapper[4870]: I0130 09:45:01.453395 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-jk4bx" event={"ID":"54d93b9e-2b65-40bb-81f7-134f3ce2d101","Type":"ContainerStarted","Data":"50620433ec0edc768d17bbe6ad833e170a77e2cd498e1f06578e383e29a8cb5d"} Jan 30 09:45:01 crc kubenswrapper[4870]: I0130 09:45:01.490867 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-jk4bx" podStartSLOduration=1.490848613 podStartE2EDuration="1.490848613s" podCreationTimestamp="2026-01-30 09:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 09:45:01.476771343 +0000 UTC m=+5740.172318462" watchObservedRunningTime="2026-01-30 09:45:01.490848613 +0000 UTC m=+5740.186395722" Jan 30 09:45:02 crc kubenswrapper[4870]: I0130 09:45:02.462864 4870 generic.go:334] "Generic (PLEG): container finished" podID="54d93b9e-2b65-40bb-81f7-134f3ce2d101" containerID="202b2d053a72796ab3db27f1b87308f2dcc900d8af762a773e5d8fc1878c82e2" exitCode=0 Jan 30 09:45:02 crc kubenswrapper[4870]: I0130 09:45:02.462922 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-jk4bx" event={"ID":"54d93b9e-2b65-40bb-81f7-134f3ce2d101","Type":"ContainerDied","Data":"202b2d053a72796ab3db27f1b87308f2dcc900d8af762a773e5d8fc1878c82e2"} Jan 30 09:45:03 crc kubenswrapper[4870]: I0130 09:45:03.849940 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-jk4bx" Jan 30 09:45:03 crc kubenswrapper[4870]: I0130 09:45:03.893509 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwnwp\" (UniqueName: \"kubernetes.io/projected/54d93b9e-2b65-40bb-81f7-134f3ce2d101-kube-api-access-xwnwp\") pod \"54d93b9e-2b65-40bb-81f7-134f3ce2d101\" (UID: \"54d93b9e-2b65-40bb-81f7-134f3ce2d101\") " Jan 30 09:45:03 crc kubenswrapper[4870]: I0130 09:45:03.893659 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54d93b9e-2b65-40bb-81f7-134f3ce2d101-secret-volume\") pod \"54d93b9e-2b65-40bb-81f7-134f3ce2d101\" (UID: \"54d93b9e-2b65-40bb-81f7-134f3ce2d101\") " Jan 30 09:45:03 crc kubenswrapper[4870]: I0130 09:45:03.893714 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54d93b9e-2b65-40bb-81f7-134f3ce2d101-config-volume\") pod \"54d93b9e-2b65-40bb-81f7-134f3ce2d101\" (UID: \"54d93b9e-2b65-40bb-81f7-134f3ce2d101\") " Jan 30 09:45:03 crc kubenswrapper[4870]: I0130 09:45:03.894603 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54d93b9e-2b65-40bb-81f7-134f3ce2d101-config-volume" (OuterVolumeSpecName: "config-volume") pod "54d93b9e-2b65-40bb-81f7-134f3ce2d101" (UID: "54d93b9e-2b65-40bb-81f7-134f3ce2d101"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 09:45:03 crc kubenswrapper[4870]: I0130 09:45:03.899533 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54d93b9e-2b65-40bb-81f7-134f3ce2d101-kube-api-access-xwnwp" (OuterVolumeSpecName: "kube-api-access-xwnwp") pod "54d93b9e-2b65-40bb-81f7-134f3ce2d101" (UID: "54d93b9e-2b65-40bb-81f7-134f3ce2d101"). InnerVolumeSpecName "kube-api-access-xwnwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:45:03 crc kubenswrapper[4870]: I0130 09:45:03.899764 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54d93b9e-2b65-40bb-81f7-134f3ce2d101-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "54d93b9e-2b65-40bb-81f7-134f3ce2d101" (UID: "54d93b9e-2b65-40bb-81f7-134f3ce2d101"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:45:03 crc kubenswrapper[4870]: I0130 09:45:03.996203 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwnwp\" (UniqueName: \"kubernetes.io/projected/54d93b9e-2b65-40bb-81f7-134f3ce2d101-kube-api-access-xwnwp\") on node \"crc\" DevicePath \"\"" Jan 30 09:45:03 crc kubenswrapper[4870]: I0130 09:45:03.996238 4870 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54d93b9e-2b65-40bb-81f7-134f3ce2d101-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:45:03 crc kubenswrapper[4870]: I0130 09:45:03.996249 4870 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54d93b9e-2b65-40bb-81f7-134f3ce2d101-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:45:04 crc kubenswrapper[4870]: I0130 09:45:04.486400 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-jk4bx" event={"ID":"54d93b9e-2b65-40bb-81f7-134f3ce2d101","Type":"ContainerDied","Data":"50620433ec0edc768d17bbe6ad833e170a77e2cd498e1f06578e383e29a8cb5d"} Jan 30 09:45:04 crc kubenswrapper[4870]: I0130 09:45:04.486447 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50620433ec0edc768d17bbe6ad833e170a77e2cd498e1f06578e383e29a8cb5d" Jan 30 09:45:04 crc kubenswrapper[4870]: I0130 09:45:04.486509 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-jk4bx" Jan 30 09:45:04 crc kubenswrapper[4870]: I0130 09:45:04.577011 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496060-gq9mb"] Jan 30 09:45:04 crc kubenswrapper[4870]: I0130 09:45:04.585036 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496060-gq9mb"] Jan 30 09:45:06 crc kubenswrapper[4870]: I0130 09:45:06.088510 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f537a705-b98d-4cc1-8fba-f9fb4145fc33" path="/var/lib/kubelet/pods/f537a705-b98d-4cc1-8fba-f9fb4145fc33/volumes" Jan 30 09:45:14 crc kubenswrapper[4870]: I0130 09:45:14.075400 4870 scope.go:117] "RemoveContainer" containerID="6306a905a69b5913903a55220b56d897c0cf89cc4b73121b61a3a52cb36ff71f" Jan 30 09:45:14 crc kubenswrapper[4870]: E0130 09:45:14.076188 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:45:25 crc kubenswrapper[4870]: I0130 09:45:25.074781 4870 scope.go:117] "RemoveContainer" containerID="6306a905a69b5913903a55220b56d897c0cf89cc4b73121b61a3a52cb36ff71f" Jan 30 09:45:25 crc kubenswrapper[4870]: E0130 09:45:25.075738 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:45:39 crc kubenswrapper[4870]: I0130 09:45:39.075012 4870 scope.go:117] "RemoveContainer" containerID="6306a905a69b5913903a55220b56d897c0cf89cc4b73121b61a3a52cb36ff71f" Jan 30 09:45:39 crc kubenswrapper[4870]: E0130 09:45:39.075781 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:45:39 crc kubenswrapper[4870]: I0130 09:45:39.804835 4870 scope.go:117] "RemoveContainer" containerID="21570fbd391aa6805bfee83f36df9ca917daf03782908d47cd7dd4eedf90e176" Jan 30 09:45:52 crc kubenswrapper[4870]: I0130 09:45:52.088777 4870 scope.go:117] "RemoveContainer" containerID="6306a905a69b5913903a55220b56d897c0cf89cc4b73121b61a3a52cb36ff71f" Jan 30 09:45:52 crc kubenswrapper[4870]: E0130 09:45:52.092277 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:46:06 crc kubenswrapper[4870]: I0130 09:46:06.075051 4870 scope.go:117] "RemoveContainer" containerID="6306a905a69b5913903a55220b56d897c0cf89cc4b73121b61a3a52cb36ff71f" Jan 30 09:46:06 crc kubenswrapper[4870]: E0130 09:46:06.075849 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:46:19 crc kubenswrapper[4870]: I0130 09:46:19.075241 4870 scope.go:117] "RemoveContainer" containerID="6306a905a69b5913903a55220b56d897c0cf89cc4b73121b61a3a52cb36ff71f" Jan 30 09:46:19 crc kubenswrapper[4870]: E0130 09:46:19.077653 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:46:34 crc kubenswrapper[4870]: I0130 09:46:34.075495 4870 scope.go:117] "RemoveContainer" containerID="6306a905a69b5913903a55220b56d897c0cf89cc4b73121b61a3a52cb36ff71f" Jan 30 09:46:34 crc kubenswrapper[4870]: E0130 09:46:34.076295 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:46:46 crc kubenswrapper[4870]: I0130 09:46:46.074804 4870 scope.go:117] "RemoveContainer" containerID="6306a905a69b5913903a55220b56d897c0cf89cc4b73121b61a3a52cb36ff71f" Jan 30 09:46:46 crc kubenswrapper[4870]: E0130 09:46:46.076860 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d" Jan 30 09:47:01 crc kubenswrapper[4870]: I0130 09:47:01.074655 4870 scope.go:117] "RemoveContainer" containerID="6306a905a69b5913903a55220b56d897c0cf89cc4b73121b61a3a52cb36ff71f" Jan 30 09:47:01 crc kubenswrapper[4870]: E0130 09:47:01.076493 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j4sd8_openshift-machine-config-operator(5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d)\"" pod="openshift-machine-config-operator/machine-config-daemon-j4sd8" podUID="5d3c8db6-cf22-4fb2-ae7c-a3d544473a6d"